Test Report: QEMU_macOS 19423

1f2c26fb323282b69eee479fdee82bbe44410c3d:2024-08-16:35811

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.77
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.09
27 TestAddons/Setup 10.44
28 TestCertOptions 10.28
29 TestCertExpiration 195.43
30 TestDockerFlags 10.08
31 TestForceSystemdFlag 10.48
32 TestForceSystemdEnv 10.02
38 TestErrorSpam/setup 9.82
47 TestFunctional/serial/StartWithProxy 9.98
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
61 TestFunctional/serial/MinikubeKubectlCmd 0.77
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.06
63 TestFunctional/serial/ExtraConfig 5.25
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.26
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.28
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 75.14
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.29
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.77
141 TestMultiControlPlane/serial/StartCluster 9.85
142 TestMultiControlPlane/serial/DeployApp 71.01
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 38.77
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.68
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 3.04
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 10.08
165 TestJSONOutput/start/Command 9.81
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.22
197 TestMountStart/serial/StartWithMountFirst 10.05
200 TestMultiNode/serial/FreshStart2Nodes 9.82
201 TestMultiNode/serial/DeployApp2Nodes 91.09
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 37.82
209 TestMultiNode/serial/RestartKeepsNodes 8.76
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 2.01
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.29
217 TestPreload 10.28
219 TestScheduledStopUnix 10.07
220 TestSkaffold 12.36
223 TestRunningBinaryUpgrade 600.48
225 TestKubernetesUpgrade 18.68
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.26
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.95
241 TestStoppedBinaryUpgrade/Upgrade 574.58
243 TestPause/serial/Start 9.92
253 TestNoKubernetes/serial/StartWithK8s 10.03
254 TestNoKubernetes/serial/StartWithStopK8s 5.31
255 TestNoKubernetes/serial/Start 5.29
259 TestNoKubernetes/serial/StartNoArgs 5.31
261 TestNetworkPlugins/group/auto/Start 9.81
262 TestNetworkPlugins/group/kindnet/Start 9.85
263 TestNetworkPlugins/group/calico/Start 9.8
264 TestNetworkPlugins/group/custom-flannel/Start 9.9
265 TestNetworkPlugins/group/false/Start 9.87
266 TestNetworkPlugins/group/enable-default-cni/Start 9.77
267 TestNetworkPlugins/group/flannel/Start 9.92
268 TestNetworkPlugins/group/bridge/Start 9.71
269 TestNetworkPlugins/group/kubenet/Start 9.9
272 TestStartStop/group/old-k8s-version/serial/FirstStart 10.24
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/embed-certs/serial/FirstStart 10.02
285 TestStartStop/group/no-preload/serial/FirstStart 11.82
286 TestStartStop/group/embed-certs/serial/DeployApp 0.1
287 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
290 TestStartStop/group/embed-certs/serial/SecondStart 6.24
291 TestStartStop/group/no-preload/serial/DeployApp 0.1
292 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
293 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
295 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
296 TestStartStop/group/embed-certs/serial/Pause 0.1
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.02
301 TestStartStop/group/no-preload/serial/SecondStart 5.92
302 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.07
304 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
305 TestStartStop/group/no-preload/serial/Pause 0.1
307 TestStartStop/group/newest-cni/serial/FirstStart 11.67
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.35
315 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.26
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
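
To reproduce a single failure from this table locally, the integration tests can be re-run by name with go test's -run filter. This is a hypothetical invocation: the exact flags and environment come from the repo's Makefile, and out/minikube-darwin-arm64 must already be built.

	# run one failing test by name from the repo root (sketch, not the CI command)
	go test -run 'TestOffline' ./test/integration -timeout 30m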

TestDownloadOnly/v1.20.0/json-events (11.77s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-222000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-222000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.771317958s)

-- stdout --
	{"specversion":"1.0","id":"237a24f7-ef52-4dad-ae24-d7c88f610832","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-222000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6dcc0b3a-6d2f-4a2a-9202-74f085f06f13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"b46306af-7fdd-4595-bbb2-a3a0be67bc70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig"}}
	{"specversion":"1.0","id":"4fd56ee5-25e1-4842-af16-0bca74e55cfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"cbb4249d-6a6a-4348-b365-e1aa4dcce6e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"537007a0-aa2b-4399-bd8d-c70d47ee25e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube"}}
	{"specversion":"1.0","id":"38659134-1269-4024-9b5e-a19f2cea4cb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"e5b07729-0921-43b7-bd7f-d4371b269bb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4df57ec-fc29-4139-9c43-4bddeef5a421","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"7fe50b5f-e5bc-4b02-8b38-deb5fefc076b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dfe9a447-43ba-4a29-8264-02c33ceef03d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-222000\" primary control-plane node in \"download-only-222000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e6f1e76-2c85-41e3-aca6-0ca270ac7423","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b69c75b-e73b-40ea-bcae-9d5f8138b9be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0] Decompressors:map[bz2:0x140000b72b0 gz:0x140000b72b8 tar:0x140000b7260 tar.bz2:0x140000b7270 tar.gz:0x140000b7280 tar.xz:0x140000b7290 tar.zst:0x140000b72a0 tbz2:0x140000b7270 tgz:0x14
0000b7280 txz:0x140000b7290 tzst:0x140000b72a0 xz:0x140000b72c0 zip:0x140000b72d0 zst:0x140000b72c8] Getters:map[file:0x140002ba700 http:0x14000576550 https:0x140005765a0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"43b92b1d-1f88-4585-90b2-d136701d8bfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0816 05:19:22.630328    6748 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:19:22.630470    6748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:22.630473    6748 out.go:358] Setting ErrFile to fd 2...
	I0816 05:19:22.630475    6748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:22.630591    6748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	W0816 05:19:22.630680    6748 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19423-6249/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19423-6249/.minikube/config/config.json: no such file or directory
	I0816 05:19:22.632045    6748 out.go:352] Setting JSON to true
	I0816 05:19:22.648892    6748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4731,"bootTime":1723806031,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:19:22.648956    6748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:19:22.653161    6748 out.go:97] [download-only-222000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:19:22.653273    6748 notify.go:220] Checking for updates...
	W0816 05:19:22.653324    6748 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 05:19:22.657711    6748 out.go:169] MINIKUBE_LOCATION=19423
	I0816 05:19:22.661170    6748 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:19:22.666794    6748 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:19:22.671069    6748 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:19:22.675124    6748 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	W0816 05:19:22.682098    6748 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 05:19:22.682342    6748 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:19:22.685831    6748 out.go:97] Using the qemu2 driver based on user configuration
	I0816 05:19:22.685850    6748 start.go:297] selected driver: qemu2
	I0816 05:19:22.685854    6748 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:19:22.685923    6748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:19:22.689406    6748 out.go:169] Automatically selected the socket_vmnet network
	I0816 05:19:22.696222    6748 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0816 05:19:22.696320    6748 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 05:19:22.696411    6748 cni.go:84] Creating CNI manager for ""
	I0816 05:19:22.696417    6748 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 05:19:22.696478    6748 start.go:340] cluster config:
	{Name:download-only-222000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-222000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:19:22.700322    6748 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:19:22.704921    6748 out.go:97] Downloading VM boot image ...
	I0816 05:19:22.704949    6748 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso
	I0816 05:19:27.503579    6748 out.go:97] Starting "download-only-222000" primary control-plane node in "download-only-222000" cluster
	I0816 05:19:27.503597    6748 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:19:27.566380    6748 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 05:19:27.566404    6748 cache.go:56] Caching tarball of preloaded images
	I0816 05:19:27.566807    6748 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:19:27.571039    6748 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 05:19:27.571047    6748 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 05:19:27.657074    6748 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 05:19:33.274734    6748 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 05:19:33.274894    6748 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 05:19:33.975208    6748 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 05:19:33.975404    6748 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/download-only-222000/config.json ...
	I0816 05:19:33.975423    6748 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/download-only-222000/config.json: {Name:mke6c41a7c797054013650b66154396ce0ff2a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:19:33.976579    6748 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:19:33.977005    6748 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0816 05:19:34.325038    6748 out.go:193] 
	W0816 05:19:34.331084    6748 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0] Decompressors:map[bz2:0x140000b72b0 gz:0x140000b72b8 tar:0x140000b7260 tar.bz2:0x140000b7270 tar.gz:0x140000b7280 tar.xz:0x140000b7290 tar.zst:0x140000b72a0 tbz2:0x140000b7270 tgz:0x140000b7280 txz:0x140000b7290 tzst:0x140000b72a0 xz:0x140000b72c0 zip:0x140000b72d0 zst:0x140000b72c8] Getters:map[file:0x140002ba700 http:0x14000576550 https:0x140005765a0] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0816 05:19:34.331113    6748 out_reason.go:110] 
	W0816 05:19:34.338039    6748 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:19:34.339666    6748 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-222000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.77s)
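
The decisive error above is a 404 while fetching the kubectl checksum file for darwin/arm64 at v1.20.0; that release likely predates published darwin/arm64 kubectl binaries, so the download can never succeed. A quick out-of-band check of the URLs taken from the log (hypothetical triage, not part of the test run):

	# HEAD request against the checksum URL that returned 404 in the log
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
	# compare with darwin/amd64, which was published for v1.20.0
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 | head -n 1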

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
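
This failure is direct fallout from the json-events failure above: kubectl was never downloaded, so the cache path the assertion stats cannot exist. The same check by hand (hypothetical, reusing the MINIKUBE_HOME from this run):

	stat /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/darwin/arm64/v1.20.0/kubectl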

TestOffline (10.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-189000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-189000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.9734725s)

-- stdout --
	* [offline-docker-189000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-189000" primary control-plane node in "offline-docker-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:29:29.916319    8088 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:29:29.916456    8088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:29:29.916459    8088 out.go:358] Setting ErrFile to fd 2...
	I0816 05:29:29.916462    8088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:29:29.916599    8088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:29:29.917898    8088 out.go:352] Setting JSON to false
	I0816 05:29:29.935736    8088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5338,"bootTime":1723806031,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:29:29.935831    8088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:29:29.940193    8088 out.go:177] * [offline-docker-189000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:29:29.948116    8088 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:29:29.948141    8088 notify.go:220] Checking for updates...
	I0816 05:29:29.955037    8088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:29:29.958068    8088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:29:29.961025    8088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:29:29.964130    8088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:29:29.967070    8088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:29:29.968633    8088 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:29:29.968703    8088 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:29:29.973057    8088 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:29:29.979947    8088 start.go:297] selected driver: qemu2
	I0816 05:29:29.979957    8088 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:29:29.979965    8088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:29:29.981871    8088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:29:29.985046    8088 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:29:29.988122    8088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:29:29.988139    8088 cni.go:84] Creating CNI manager for ""
	I0816 05:29:29.988146    8088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:29:29.988149    8088 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:29:29.988177    8088 start.go:340] cluster config:
	{Name:offline-docker-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:29:29.991828    8088 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:29:30.000062    8088 out.go:177] * Starting "offline-docker-189000" primary control-plane node in "offline-docker-189000" cluster
	I0816 05:29:30.004094    8088 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:29:30.004127    8088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:29:30.004137    8088 cache.go:56] Caching tarball of preloaded images
	I0816 05:29:30.004214    8088 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:29:30.004220    8088 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:29:30.004289    8088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/offline-docker-189000/config.json ...
	I0816 05:29:30.004300    8088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/offline-docker-189000/config.json: {Name:mk71ae2999b9d2f2e0b88a5b56c19744d9097b69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:29:30.004525    8088 start.go:360] acquireMachinesLock for offline-docker-189000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:29:30.004562    8088 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "offline-docker-189000"
	I0816 05:29:30.004579    8088 start.go:93] Provisioning new machine with config: &{Name:offline-docker-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:29:30.004607    8088 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:29:30.008039    8088 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 05:29:30.023793    8088 start.go:159] libmachine.API.Create for "offline-docker-189000" (driver="qemu2")
	I0816 05:29:30.023824    8088 client.go:168] LocalClient.Create starting
	I0816 05:29:30.023901    8088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:29:30.023935    8088 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:30.023947    8088 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:30.023992    8088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:29:30.024016    8088 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:30.024026    8088 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:30.024388    8088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:29:30.178711    8088 main.go:141] libmachine: Creating SSH key...
	I0816 05:29:30.431007    8088 main.go:141] libmachine: Creating Disk image...
	I0816 05:29:30.431018    8088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:29:30.431227    8088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2
	I0816 05:29:30.440977    8088 main.go:141] libmachine: STDOUT: 
	I0816 05:29:30.440998    8088 main.go:141] libmachine: STDERR: 
	I0816 05:29:30.441070    8088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2 +20000M
	I0816 05:29:30.449493    8088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:29:30.449516    8088 main.go:141] libmachine: STDERR: 
	I0816 05:29:30.449541    8088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2
	I0816 05:29:30.449549    8088 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:29:30.449566    8088 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:29:30.449599    8088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:6a:51:e0:2e:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2
	I0816 05:29:30.451370    8088 main.go:141] libmachine: STDOUT: 
	I0816 05:29:30.451390    8088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:29:30.451410    8088 client.go:171] duration metric: took 427.585542ms to LocalClient.Create
	I0816 05:29:32.453503    8088 start.go:128] duration metric: took 2.448920792s to createHost
	I0816 05:29:32.453530    8088 start.go:83] releasing machines lock for "offline-docker-189000", held for 2.449000375s
	W0816 05:29:32.453543    8088 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:32.467845    8088 out.go:177] * Deleting "offline-docker-189000" in qemu2 ...
	W0816 05:29:32.483029    8088 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:32.483040    8088 start.go:729] Will try again in 5 seconds ...
	I0816 05:29:37.485245    8088 start.go:360] acquireMachinesLock for offline-docker-189000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:29:37.485799    8088 start.go:364] duration metric: took 419.709µs to acquireMachinesLock for "offline-docker-189000"
	I0816 05:29:37.485943    8088 start.go:93] Provisioning new machine with config: &{Name:offline-docker-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:29:37.486152    8088 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:29:37.494743    8088 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 05:29:37.544713    8088 start.go:159] libmachine.API.Create for "offline-docker-189000" (driver="qemu2")
	I0816 05:29:37.544763    8088 client.go:168] LocalClient.Create starting
	I0816 05:29:37.544889    8088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:29:37.544953    8088 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:37.544972    8088 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:37.545048    8088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:29:37.545093    8088 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:37.545104    8088 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:37.545624    8088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:29:37.708986    8088 main.go:141] libmachine: Creating SSH key...
	I0816 05:29:37.807766    8088 main.go:141] libmachine: Creating Disk image...
	I0816 05:29:37.807771    8088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:29:37.807989    8088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2
	I0816 05:29:37.817192    8088 main.go:141] libmachine: STDOUT: 
	I0816 05:29:37.817213    8088 main.go:141] libmachine: STDERR: 
	I0816 05:29:37.817263    8088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2 +20000M
	I0816 05:29:37.825074    8088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:29:37.825090    8088 main.go:141] libmachine: STDERR: 
	I0816 05:29:37.825104    8088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2
	I0816 05:29:37.825112    8088 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:29:37.825121    8088 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:29:37.825159    8088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ea:03:52:62:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/offline-docker-189000/disk.qcow2
	I0816 05:29:37.826732    8088 main.go:141] libmachine: STDOUT: 
	I0816 05:29:37.826746    8088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:29:37.826758    8088 client.go:171] duration metric: took 281.994583ms to LocalClient.Create
	I0816 05:29:39.827056    8088 start.go:128] duration metric: took 2.340924042s to createHost
	I0816 05:29:39.827074    8088 start.go:83] releasing machines lock for "offline-docker-189000", held for 2.341291792s
	W0816 05:29:39.827156    8088 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:39.833520    8088 out.go:201] 
	W0816 05:29:39.837582    8088 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:29:39.837599    8088 out.go:270] * 
	* 
	W0816 05:29:39.838099    8088 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:29:39.850442    8088 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-189000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-16 05:29:39.860016 -0700 PDT m=+617.345925918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-189000 -n offline-docker-189000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-189000 -n offline-docker-189000: exit status 7 (33.02325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-189000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-189000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-189000
--- FAIL: TestOffline (10.09s)
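
Note the recurring failure mode: both VM creation attempts die with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. the qemu2 driver could not reach the socket_vmnet daemon on the host. TestAddons/Setup below fails identically, and the many ~10 s Start failures in the table above are consistent with the same cause. A minimal host-side triage (hypothetical commands, assuming the socket_vmnet paths shown in the log):

	ls -l /var/run/socket_vmnet   # does the daemon's Unix socket exist?
	pgrep -fl socket_vmnet        # is the socket_vmnet daemon running?
	# if managed by launchd, restarting the service (label depends on the
	# install method) typically clears the "Connection refused" error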

TestAddons/Setup (10.44s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-851000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-851000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.437924959s)

-- stdout --
	* [addons-851000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-851000" primary control-plane node in "addons-851000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-851000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:19:42.942930    6824 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:19:42.943069    6824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:42.943071    6824 out.go:358] Setting ErrFile to fd 2...
	I0816 05:19:42.943074    6824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:42.943183    6824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:19:42.944182    6824 out.go:352] Setting JSON to false
	I0816 05:19:42.959919    6824 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4751,"bootTime":1723806031,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:19:42.959988    6824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:19:42.962477    6824 out.go:177] * [addons-851000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:19:42.969580    6824 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:19:42.969628    6824 notify.go:220] Checking for updates...
	I0816 05:19:42.976567    6824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:19:42.980576    6824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:19:42.983572    6824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:19:42.986550    6824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:19:42.989562    6824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:19:42.992687    6824 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:19:42.996529    6824 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:19:43.002517    6824 start.go:297] selected driver: qemu2
	I0816 05:19:43.002525    6824 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:19:43.002530    6824 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:19:43.004820    6824 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:19:43.007518    6824 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:19:43.010647    6824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:19:43.010676    6824 cni.go:84] Creating CNI manager for ""
	I0816 05:19:43.010683    6824 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:19:43.010687    6824 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:19:43.010718    6824 start.go:340] cluster config:
	{Name:addons-851000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:19:43.014232    6824 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:19:43.022534    6824 out.go:177] * Starting "addons-851000" primary control-plane node in "addons-851000" cluster
	I0816 05:19:43.026563    6824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:19:43.026576    6824 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:19:43.026584    6824 cache.go:56] Caching tarball of preloaded images
	I0816 05:19:43.026651    6824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:19:43.026657    6824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:19:43.026856    6824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/addons-851000/config.json ...
	I0816 05:19:43.026867    6824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/addons-851000/config.json: {Name:mk35d80f48d68362c0ed21f65ed66ce2fae08eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:19:43.027242    6824 start.go:360] acquireMachinesLock for addons-851000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:19:43.027305    6824 start.go:364] duration metric: took 56.875µs to acquireMachinesLock for "addons-851000"
	I0816 05:19:43.027317    6824 start.go:93] Provisioning new machine with config: &{Name:addons-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:19:43.027344    6824 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:19:43.030557    6824 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0816 05:19:43.048308    6824 start.go:159] libmachine.API.Create for "addons-851000" (driver="qemu2")
	I0816 05:19:43.048328    6824 client.go:168] LocalClient.Create starting
	I0816 05:19:43.048447    6824 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:19:43.258967    6824 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:19:43.304681    6824 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:19:43.649360    6824 main.go:141] libmachine: Creating SSH key...
	I0816 05:19:43.758284    6824 main.go:141] libmachine: Creating Disk image...
	I0816 05:19:43.758293    6824 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:19:43.758685    6824 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2
	I0816 05:19:43.767953    6824 main.go:141] libmachine: STDOUT: 
	I0816 05:19:43.767972    6824 main.go:141] libmachine: STDERR: 
	I0816 05:19:43.768018    6824 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2 +20000M
	I0816 05:19:43.775844    6824 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:19:43.775858    6824 main.go:141] libmachine: STDERR: 
	I0816 05:19:43.775873    6824 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2
	I0816 05:19:43.775876    6824 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:19:43.775905    6824 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:19:43.775944    6824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:91:d3:75:8c:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2
	I0816 05:19:43.777479    6824 main.go:141] libmachine: STDOUT: 
	I0816 05:19:43.777497    6824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:19:43.777514    6824 client.go:171] duration metric: took 729.187708ms to LocalClient.Create
	I0816 05:19:45.779673    6824 start.go:128] duration metric: took 2.752334084s to createHost
	I0816 05:19:45.779719    6824 start.go:83] releasing machines lock for "addons-851000", held for 2.752432875s
	W0816 05:19:45.779783    6824 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:19:45.790911    6824 out.go:177] * Deleting "addons-851000" in qemu2 ...
	W0816 05:19:45.824768    6824 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:19:45.824800    6824 start.go:729] Will try again in 5 seconds ...
	I0816 05:19:50.827028    6824 start.go:360] acquireMachinesLock for addons-851000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:19:50.827544    6824 start.go:364] duration metric: took 378.041µs to acquireMachinesLock for "addons-851000"
	I0816 05:19:50.827694    6824 start.go:93] Provisioning new machine with config: &{Name:addons-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:19:50.827976    6824 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:19:50.843542    6824 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0816 05:19:50.893870    6824 start.go:159] libmachine.API.Create for "addons-851000" (driver="qemu2")
	I0816 05:19:50.893919    6824 client.go:168] LocalClient.Create starting
	I0816 05:19:50.894040    6824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:19:50.894112    6824 main.go:141] libmachine: Decoding PEM data...
	I0816 05:19:50.894128    6824 main.go:141] libmachine: Parsing certificate...
	I0816 05:19:50.894216    6824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:19:50.894264    6824 main.go:141] libmachine: Decoding PEM data...
	I0816 05:19:50.894285    6824 main.go:141] libmachine: Parsing certificate...
	I0816 05:19:50.894944    6824 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:19:51.109289    6824 main.go:141] libmachine: Creating SSH key...
	I0816 05:19:51.290416    6824 main.go:141] libmachine: Creating Disk image...
	I0816 05:19:51.290428    6824 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:19:51.290664    6824 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2
	I0816 05:19:51.300208    6824 main.go:141] libmachine: STDOUT: 
	I0816 05:19:51.300227    6824 main.go:141] libmachine: STDERR: 
	I0816 05:19:51.300276    6824 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2 +20000M
	I0816 05:19:51.308161    6824 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:19:51.308181    6824 main.go:141] libmachine: STDERR: 
	I0816 05:19:51.308196    6824 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2
	I0816 05:19:51.308203    6824 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:19:51.308212    6824 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:19:51.308247    6824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:1e:e6:ee:9c:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/addons-851000/disk.qcow2
	I0816 05:19:51.309836    6824 main.go:141] libmachine: STDOUT: 
	I0816 05:19:51.309850    6824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:19:51.309863    6824 client.go:171] duration metric: took 415.943458ms to LocalClient.Create
	I0816 05:19:53.311562    6824 start.go:128] duration metric: took 2.4835375s to createHost
	I0816 05:19:53.311705    6824 start.go:83] releasing machines lock for "addons-851000", held for 2.484162s
	W0816 05:19:53.312114    6824 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-851000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-851000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:19:53.321590    6824 out.go:201] 
	W0816 05:19:53.328696    6824 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:19:53.328730    6824 out.go:270] * 
	* 
	W0816 05:19:53.331540    6824 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:19:53.338658    6824 out.go:201] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-851000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.44s)
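
The trace above also shows the shape of the failure: the first createHost attempt fails after 2.75s, the profile is deleted, a second attempt begins 5 seconds later and fails after 2.48s, and minikube exits with GUEST_PROVISION. Those phases, plus setup overhead, account for the 10.44s wall time. A sketch of that control flow, with hypothetical names rather than minikube's actual implementation:

    // retry_flow.go: the create / delete / retry-once / give-up sequence
    // visible in the start.go log lines above, reduced to its shape.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // createHost stands in for libmachine's QEMU start, which on this agent
    // always fails because nothing listens on /var/run/socket_vmnet.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := createHost(); err == nil {
            return
        }
        fmt.Println("! StartHost failed, but will try again")
        time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
        if err := createHost(); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            os.Exit(80) // the exit status asserted by addons_test.go:112
        }
    }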

TestCertOptions (10.28s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-804000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-804000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.013255541s)

-- stdout --
	* [cert-options-804000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-804000" primary control-plane node in "cert-options-804000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-804000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-804000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-804000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-804000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-804000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.105084ms)

-- stdout --
	* The control-plane node cert-options-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-804000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-804000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-804000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-804000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-804000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.804917ms)

-- stdout --
	* The control-plane node cert-options-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-804000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-804000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-804000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-16 05:30:10.247688 -0700 PDT m=+647.734099376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-804000 -n cert-options-804000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-804000 -n cert-options-804000: exit status 7 (30.7485ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-804000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-804000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-804000
--- FAIL: TestCertOptions (10.28s)
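
With no running VM, the remaining assertions fail in cascade: the ssh-based openssl SAN check exits 83, kubectl config view returns an empty config, and admin.conf is unreadable. For reference, the SAN assertions this test drives through openssl can be expressed with crypto/x509; "apiserver.crt" below is a hypothetical local copy of /var/lib/minikube/certs/apiserver.crt:

    // san_check.go: verify the apiserver certificate carries the extra
    // names and IPs passed via --apiserver-names / --apiserver-ips.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // hypothetical copy
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // The test expects localhost and www.google.com among the DNS SANs
        // and 127.0.0.1 and 192.168.15.15 among the IP SANs.
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs: ", cert.IPAddresses)
    }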

TestCertExpiration (195.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.013366625s)

-- stdout --
	* [cert-expiration-169000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-169000" primary control-plane node in "cert-expiration-169000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-169000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-169000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.248662375s)

-- stdout --
	* [cert-expiration-169000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-169000" primary control-plane node in "cert-expiration-169000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-169000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-169000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-169000" primary control-plane node in "cert-expiration-169000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-169000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-16 05:33:10.242507 -0700 PDT m=+827.731886168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-169000 -n cert-expiration-169000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-169000 -n cert-expiration-169000: exit status 7 (56.700209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-169000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-169000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-169000
--- FAIL: TestCertExpiration (195.43s)
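
Note the 195.43s duration against roughly 15s of minikube runtime: the FAILED timestamps here and in TestCertOptions are exactly three minutes apart, matching the --cert-expiration=3m window the test waits out before restarting with --cert-expiration=8760h and expecting an expired-certs warning. A small sketch of the durations involved, not taken from the test code:

    // expiration_sketch.go: the two --cert-expiration values as durations.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        short, _ := time.ParseDuration("3m")   // first start: certs expire fast
        long, _ := time.ParseDuration("8760h") // second start: one year
        fmt.Println(short, long, long.Hours()/24) // 3m0s 8760h0m0s 365
        // A cert is expired once time.Now().After(cert.NotAfter); the second
        // start should detect that and warn, which cert_options_test.go:136
        // asserts.
    }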

TestDockerFlags (10.08s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-193000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-193000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.852827666s)

-- stdout --
	* [docker-flags-193000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-193000" primary control-plane node in "docker-flags-193000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-193000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:29:50.024837    8275 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:29:50.024966    8275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:29:50.024969    8275 out.go:358] Setting ErrFile to fd 2...
	I0816 05:29:50.024972    8275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:29:50.025086    8275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:29:50.026125    8275 out.go:352] Setting JSON to false
	I0816 05:29:50.042175    8275 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5359,"bootTime":1723806031,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:29:50.042249    8275 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:29:50.049213    8275 out.go:177] * [docker-flags-193000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:29:50.057194    8275 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:29:50.057247    8275 notify.go:220] Checking for updates...
	I0816 05:29:50.066183    8275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:29:50.069138    8275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:29:50.072198    8275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:29:50.076169    8275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:29:50.079150    8275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:29:50.083502    8275 config.go:182] Loaded profile config "force-systemd-flag-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:29:50.083568    8275 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:29:50.083619    8275 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:29:50.088170    8275 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:29:50.095179    8275 start.go:297] selected driver: qemu2
	I0816 05:29:50.095185    8275 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:29:50.095192    8275 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:29:50.097603    8275 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:29:50.101188    8275 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:29:50.104225    8275 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0816 05:29:50.104264    8275 cni.go:84] Creating CNI manager for ""
	I0816 05:29:50.104271    8275 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:29:50.104285    8275 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:29:50.104316    8275 start.go:340] cluster config:
	{Name:docker-flags-193000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:29:50.108290    8275 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:29:50.116198    8275 out.go:177] * Starting "docker-flags-193000" primary control-plane node in "docker-flags-193000" cluster
	I0816 05:29:50.120185    8275 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:29:50.120203    8275 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:29:50.120217    8275 cache.go:56] Caching tarball of preloaded images
	I0816 05:29:50.120309    8275 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:29:50.120320    8275 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:29:50.120391    8275 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/docker-flags-193000/config.json ...
	I0816 05:29:50.120403    8275 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/docker-flags-193000/config.json: {Name:mk32769c2737751967495bce95f42ce6cc95d229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:29:50.120634    8275 start.go:360] acquireMachinesLock for docker-flags-193000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:29:50.120675    8275 start.go:364] duration metric: took 30.208µs to acquireMachinesLock for "docker-flags-193000"
	I0816 05:29:50.120689    8275 start.go:93] Provisioning new machine with config: &{Name:docker-flags-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:29:50.120719    8275 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:29:50.128136    8275 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 05:29:50.146769    8275 start.go:159] libmachine.API.Create for "docker-flags-193000" (driver="qemu2")
	I0816 05:29:50.146800    8275 client.go:168] LocalClient.Create starting
	I0816 05:29:50.146866    8275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:29:50.146896    8275 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:50.146905    8275 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:50.146944    8275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:29:50.146968    8275 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:50.146977    8275 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:50.147319    8275 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:29:50.300495    8275 main.go:141] libmachine: Creating SSH key...
	I0816 05:29:50.397975    8275 main.go:141] libmachine: Creating Disk image...
	I0816 05:29:50.397981    8275 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:29:50.398199    8275 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2
	I0816 05:29:50.407445    8275 main.go:141] libmachine: STDOUT: 
	I0816 05:29:50.407463    8275 main.go:141] libmachine: STDERR: 
	I0816 05:29:50.407505    8275 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2 +20000M
	I0816 05:29:50.415294    8275 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:29:50.415309    8275 main.go:141] libmachine: STDERR: 
	I0816 05:29:50.415324    8275 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2
	I0816 05:29:50.415329    8275 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:29:50.415341    8275 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:29:50.415369    8275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:7e:da:f4:b4:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2
	I0816 05:29:50.416945    8275 main.go:141] libmachine: STDOUT: 
	I0816 05:29:50.416961    8275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:29:50.416978    8275 client.go:171] duration metric: took 270.176667ms to LocalClient.Create
	I0816 05:29:52.419126    8275 start.go:128] duration metric: took 2.298424041s to createHost
	I0816 05:29:52.419181    8275 start.go:83] releasing machines lock for "docker-flags-193000", held for 2.298533917s
	W0816 05:29:52.419250    8275 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:52.443486    8275 out.go:177] * Deleting "docker-flags-193000" in qemu2 ...
	W0816 05:29:52.465292    8275 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:52.465312    8275 start.go:729] Will try again in 5 seconds ...
	I0816 05:29:57.467391    8275 start.go:360] acquireMachinesLock for docker-flags-193000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:29:57.467737    8275 start.go:364] duration metric: took 226.5µs to acquireMachinesLock for "docker-flags-193000"
	I0816 05:29:57.467840    8275 start.go:93] Provisioning new machine with config: &{Name:docker-flags-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:29:57.468059    8275 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:29:57.476684    8275 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 05:29:57.519477    8275 start.go:159] libmachine.API.Create for "docker-flags-193000" (driver="qemu2")
	I0816 05:29:57.519529    8275 client.go:168] LocalClient.Create starting
	I0816 05:29:57.519635    8275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:29:57.519698    8275 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:57.519714    8275 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:57.519781    8275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:29:57.519823    8275 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:57.519834    8275 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:57.520970    8275 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:29:57.689413    8275 main.go:141] libmachine: Creating SSH key...
	I0816 05:29:57.785974    8275 main.go:141] libmachine: Creating Disk image...
	I0816 05:29:57.785979    8275 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:29:57.786201    8275 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2
	I0816 05:29:57.795654    8275 main.go:141] libmachine: STDOUT: 
	I0816 05:29:57.795674    8275 main.go:141] libmachine: STDERR: 
	I0816 05:29:57.795726    8275 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2 +20000M
	I0816 05:29:57.803552    8275 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:29:57.803573    8275 main.go:141] libmachine: STDERR: 
	I0816 05:29:57.803582    8275 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2
	I0816 05:29:57.803587    8275 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:29:57.803594    8275 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:29:57.803625    8275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:19:ec:36:17:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/docker-flags-193000/disk.qcow2
	I0816 05:29:57.805214    8275 main.go:141] libmachine: STDOUT: 
	I0816 05:29:57.805231    8275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:29:57.805244    8275 client.go:171] duration metric: took 285.713833ms to LocalClient.Create
	I0816 05:29:59.807390    8275 start.go:128] duration metric: took 2.339341417s to createHost
	I0816 05:29:59.807439    8275 start.go:83] releasing machines lock for "docker-flags-193000", held for 2.339720667s
	W0816 05:29:59.807910    8275 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:59.817621    8275 out.go:201] 
	W0816 05:29:59.824621    8275 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:29:59.824644    8275 out.go:270] * 
	* 
	W0816 05:29:59.827200    8275 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:29:59.835608    8275 out.go:201] 

** /stderr **
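Note: both create attempts above fail at the same step: socket_vmnet_client cannot open the unix socket at /var/run/socket_vmnet, which points at the socket_vmnet daemon on the CI host not running or not listening, rather than at qemu itself. A minimal Go sketch of the reachability probe this error implies (the socket path is taken from the log; the probe is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same unix-socket connect that socket_vmnet_client attempts before handing fd 3 to qemu.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host the dial fails, matching the "Connection refused" in the log.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
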
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-193000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-193000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-193000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.730292ms)

-- stdout --
	* The control-plane node docker-flags-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-193000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-193000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-193000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-193000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-193000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-193000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.859ms)

-- stdout --
	* The control-plane node docker-flags-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-193000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-193000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-193000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-193000\"\n"
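Note: the assertions above check that the --docker-env and --docker-opt values given at start time surface in dockerd's systemd unit, via the Environment= and ExecStart= properties reported by systemctl show docker. Because the VM never booted, every ssh probe instead returns the "host is not running" hint. A sketch of the substring check being applied, assuming a healthy run prints a line like Environment=FOO=BAR BAZ=BAT (binary and profile name come from the log; the helper itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "docker-flags-193000",
			"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
		if err != nil {
			fmt.Println("ssh failed:", err) // the branch taken in this run (exit status 83)
			return
		}
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(string(out), kv) {
				fmt.Printf("expected %q in dockerd Environment, got: %s\n", kv, out)
			}
		}
	}
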
panic.go:626: *** TestDockerFlags FAILED at 2024-08-16 05:29:59.972832 -0700 PDT m=+637.459073418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-193000 -n docker-flags-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-193000 -n docker-flags-193000: exit status 7 (28.824417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-193000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-193000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-193000
--- FAIL: TestDockerFlags (10.08s)

TestForceSystemdFlag (10.48s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-403000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-403000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.289320625s)

-- stdout --
	* [force-systemd-flag-403000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-403000" primary control-plane node in "force-systemd-flag-403000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-403000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:29:44.527383    8254 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:29:44.527510    8254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:29:44.527514    8254 out.go:358] Setting ErrFile to fd 2...
	I0816 05:29:44.527516    8254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:29:44.527649    8254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:29:44.528821    8254 out.go:352] Setting JSON to false
	I0816 05:29:44.544663    8254 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5353,"bootTime":1723806031,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:29:44.544728    8254 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:29:44.551672    8254 out.go:177] * [force-systemd-flag-403000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:29:44.558707    8254 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:29:44.558780    8254 notify.go:220] Checking for updates...
	I0816 05:29:44.565663    8254 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:29:44.569683    8254 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:29:44.572666    8254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:29:44.575693    8254 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:29:44.578700    8254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:29:44.580507    8254 config.go:182] Loaded profile config "force-systemd-env-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:29:44.580583    8254 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:29:44.580636    8254 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:29:44.584660    8254 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:29:44.591547    8254 start.go:297] selected driver: qemu2
	I0816 05:29:44.591554    8254 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:29:44.591560    8254 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:29:44.593760    8254 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:29:44.596673    8254 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:29:44.599913    8254 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 05:29:44.599942    8254 cni.go:84] Creating CNI manager for ""
	I0816 05:29:44.599948    8254 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:29:44.599952    8254 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:29:44.599982    8254 start.go:340] cluster config:
	{Name:force-systemd-flag-403000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:29:44.603729    8254 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:29:44.611664    8254 out.go:177] * Starting "force-systemd-flag-403000" primary control-plane node in "force-systemd-flag-403000" cluster
	I0816 05:29:44.615670    8254 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:29:44.615683    8254 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:29:44.615694    8254 cache.go:56] Caching tarball of preloaded images
	I0816 05:29:44.615750    8254 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:29:44.615755    8254 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:29:44.615818    8254 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/force-systemd-flag-403000/config.json ...
	I0816 05:29:44.615830    8254 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/force-systemd-flag-403000/config.json: {Name:mk263341fd82ab0eec09faa4eda38c01e3bf9044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:29:44.616055    8254 start.go:360] acquireMachinesLock for force-systemd-flag-403000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:29:44.616094    8254 start.go:364] duration metric: took 29.459µs to acquireMachinesLock for "force-systemd-flag-403000"
	I0816 05:29:44.616108    8254 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:29:44.616137    8254 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:29:44.619669    8254 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 05:29:44.637575    8254 start.go:159] libmachine.API.Create for "force-systemd-flag-403000" (driver="qemu2")
	I0816 05:29:44.637606    8254 client.go:168] LocalClient.Create starting
	I0816 05:29:44.637671    8254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:29:44.637707    8254 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:44.637717    8254 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:44.637757    8254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:29:44.637782    8254 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:44.637791    8254 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:44.638147    8254 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:29:44.790697    8254 main.go:141] libmachine: Creating SSH key...
	I0816 05:29:44.946411    8254 main.go:141] libmachine: Creating Disk image...
	I0816 05:29:44.946417    8254 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:29:44.946636    8254 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2
	I0816 05:29:44.956012    8254 main.go:141] libmachine: STDOUT: 
	I0816 05:29:44.956030    8254 main.go:141] libmachine: STDERR: 
	I0816 05:29:44.956082    8254 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2 +20000M
	I0816 05:29:44.963875    8254 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:29:44.963893    8254 main.go:141] libmachine: STDERR: 
	I0816 05:29:44.963926    8254 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2
	I0816 05:29:44.963932    8254 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:29:44.963943    8254 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:29:44.963968    8254 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:85:6f:a7:c6:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2
	I0816 05:29:44.965536    8254 main.go:141] libmachine: STDOUT: 
	I0816 05:29:44.965551    8254 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:29:44.965568    8254 client.go:171] duration metric: took 327.963625ms to LocalClient.Create
	I0816 05:29:46.967721    8254 start.go:128] duration metric: took 2.351600667s to createHost
	I0816 05:29:46.967768    8254 start.go:83] releasing machines lock for "force-systemd-flag-403000", held for 2.351703625s
	W0816 05:29:46.967832    8254 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:46.974131    8254 out.go:177] * Deleting "force-systemd-flag-403000" in qemu2 ...
	W0816 05:29:47.007885    8254 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:47.007908    8254 start.go:729] Will try again in 5 seconds ...
	I0816 05:29:52.010045    8254 start.go:360] acquireMachinesLock for force-systemd-flag-403000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:29:52.419374    8254 start.go:364] duration metric: took 409.170667ms to acquireMachinesLock for "force-systemd-flag-403000"
	I0816 05:29:52.419479    8254 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:29:52.419767    8254 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:29:52.433441    8254 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 05:29:52.483215    8254 start.go:159] libmachine.API.Create for "force-systemd-flag-403000" (driver="qemu2")
	I0816 05:29:52.483285    8254 client.go:168] LocalClient.Create starting
	I0816 05:29:52.483482    8254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:29:52.483550    8254 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:52.483565    8254 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:52.483631    8254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:29:52.483675    8254 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:52.483691    8254 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:52.484194    8254 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:29:52.645420    8254 main.go:141] libmachine: Creating SSH key...
	I0816 05:29:52.713768    8254 main.go:141] libmachine: Creating Disk image...
	I0816 05:29:52.713774    8254 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:29:52.714006    8254 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2
	I0816 05:29:52.723474    8254 main.go:141] libmachine: STDOUT: 
	I0816 05:29:52.723496    8254 main.go:141] libmachine: STDERR: 
	I0816 05:29:52.723546    8254 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2 +20000M
	I0816 05:29:52.731384    8254 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:29:52.731404    8254 main.go:141] libmachine: STDERR: 
	I0816 05:29:52.731414    8254 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2
	I0816 05:29:52.731419    8254 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:29:52.731426    8254 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:29:52.731460    8254 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:dd:b3:6d:b1:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-flag-403000/disk.qcow2
	I0816 05:29:52.733073    8254 main.go:141] libmachine: STDOUT: 
	I0816 05:29:52.733088    8254 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:29:52.733104    8254 client.go:171] duration metric: took 249.802458ms to LocalClient.Create
	I0816 05:29:54.735348    8254 start.go:128] duration metric: took 2.315561709s to createHost
	I0816 05:29:54.735408    8254 start.go:83] releasing machines lock for "force-systemd-flag-403000", held for 2.316031166s
	W0816 05:29:54.735745    8254 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-403000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-403000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:54.757273    8254 out.go:201] 
	W0816 05:29:54.761352    8254 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:29:54.761389    8254 out.go:270] * 
	* 
	W0816 05:29:54.764213    8254 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:29:54.774079    8254 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-403000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-403000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-403000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.643291ms)

-- stdout --
	* The control-plane node force-systemd-flag-403000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-403000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-403000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
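Note: the final check in this test is that dockerd inside the VM reports the systemd cgroup driver when --force-systemd is passed; with the host stopped it never gets that far. A hedged sketch of that verification (command and profile name from the log; the expected value reflects the test's intent, not observed output):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-403000",
			"ssh", "docker info --format {{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		if driver := strings.TrimSpace(string(out)); driver != "systemd" {
			fmt.Printf("expected cgroup driver \"systemd\", got %q\n", driver)
		}
	}
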
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-16 05:29:54.869443 -0700 PDT m=+632.355601168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-403000 -n force-systemd-flag-403000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-403000 -n force-systemd-flag-403000: exit status 7 (34.274541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-403000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-403000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-403000
--- FAIL: TestForceSystemdFlag (10.48s)
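Note: TestForceSystemdEnv, below, exercises the same systemd-cgroup behavior but via the MINIKUBE_FORCE_SYSTEMD=true environment variable (visible in the stdout header that follows) rather than the --force-systemd flag. A minimal sketch of driving that variant, assuming the binary and profile name shown in the log below:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "start",
			"-p", "force-systemd-env-384000", "--memory=2048", "--driver=qemu2")
		// The env variant sets MINIKUBE_FORCE_SYSTEMD instead of passing --force-systemd.
		cmd.Env = append(os.Environ(), "MINIKUBE_FORCE_SYSTEMD=true")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("start failed: %v\n%s", err, out)
		}
	}
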

TestForceSystemdEnv (10.02s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-384000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-384000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.837697042s)

-- stdout --
	* [force-systemd-env-384000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-384000" primary control-plane node in "force-systemd-env-384000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:29:40.001982    8234 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:29:40.002137    8234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:29:40.002140    8234 out.go:358] Setting ErrFile to fd 2...
	I0816 05:29:40.002142    8234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:29:40.002276    8234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:29:40.003382    8234 out.go:352] Setting JSON to false
	I0816 05:29:40.019929    8234 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5349,"bootTime":1723806031,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:29:40.020004    8234 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:29:40.026638    8234 out.go:177] * [force-systemd-env-384000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:29:40.033588    8234 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:29:40.033710    8234 notify.go:220] Checking for updates...
	I0816 05:29:40.040619    8234 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:29:40.043572    8234 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:29:40.046595    8234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:29:40.049557    8234 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:29:40.052596    8234 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0816 05:29:40.055890    8234 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:29:40.055939    8234 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:29:40.059555    8234 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:29:40.066611    8234 start.go:297] selected driver: qemu2
	I0816 05:29:40.066617    8234 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:29:40.066622    8234 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:29:40.068788    8234 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:29:40.070422    8234 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:29:40.073617    8234 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 05:29:40.073629    8234 cni.go:84] Creating CNI manager for ""
	I0816 05:29:40.073636    8234 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:29:40.073639    8234 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:29:40.073663    8234 start.go:340] cluster config:
	{Name:force-systemd-env-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:29:40.076977    8234 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:29:40.083541    8234 out.go:177] * Starting "force-systemd-env-384000" primary control-plane node in "force-systemd-env-384000" cluster
	I0816 05:29:40.087565    8234 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:29:40.087578    8234 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:29:40.087584    8234 cache.go:56] Caching tarball of preloaded images
	I0816 05:29:40.087643    8234 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:29:40.087648    8234 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:29:40.087707    8234 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/force-systemd-env-384000/config.json ...
	I0816 05:29:40.087717    8234 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/force-systemd-env-384000/config.json: {Name:mkc44f542a7ab2a69e8c9fcd6ed849015a7bb0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:29:40.087972    8234 start.go:360] acquireMachinesLock for force-systemd-env-384000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:29:40.088002    8234 start.go:364] duration metric: took 24.459µs to acquireMachinesLock for "force-systemd-env-384000"
	I0816 05:29:40.088014    8234 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:29:40.088041    8234 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:29:40.096560    8234 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 05:29:40.111962    8234 start.go:159] libmachine.API.Create for "force-systemd-env-384000" (driver="qemu2")
	I0816 05:29:40.111995    8234 client.go:168] LocalClient.Create starting
	I0816 05:29:40.112052    8234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:29:40.112083    8234 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:40.112091    8234 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:40.112131    8234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:29:40.112153    8234 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:40.112161    8234 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:40.112497    8234 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:29:40.258657    8234 main.go:141] libmachine: Creating SSH key...
	I0816 05:29:40.352316    8234 main.go:141] libmachine: Creating Disk image...
	I0816 05:29:40.352327    8234 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:29:40.352579    8234 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2
	I0816 05:29:40.362015    8234 main.go:141] libmachine: STDOUT: 
	I0816 05:29:40.362041    8234 main.go:141] libmachine: STDERR: 
	I0816 05:29:40.362090    8234 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2 +20000M
	I0816 05:29:40.370128    8234 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:29:40.370148    8234 main.go:141] libmachine: STDERR: 
	I0816 05:29:40.370167    8234 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2
	I0816 05:29:40.370172    8234 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:29:40.370187    8234 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:29:40.370218    8234 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:9b:ce:02:03:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2
	I0816 05:29:40.371796    8234 main.go:141] libmachine: STDOUT: 
	I0816 05:29:40.371815    8234 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:29:40.371834    8234 client.go:171] duration metric: took 259.838625ms to LocalClient.Create
	I0816 05:29:42.374021    8234 start.go:128] duration metric: took 2.285986541s to createHost
	I0816 05:29:42.374113    8234 start.go:83] releasing machines lock for "force-systemd-env-384000", held for 2.286139375s
	W0816 05:29:42.374177    8234 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:42.381611    8234 out.go:177] * Deleting "force-systemd-env-384000" in qemu2 ...
	W0816 05:29:42.409042    8234 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:42.409061    8234 start.go:729] Will try again in 5 seconds ...
	I0816 05:29:47.411138    8234 start.go:360] acquireMachinesLock for force-systemd-env-384000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:29:47.411696    8234 start.go:364] duration metric: took 446.709µs to acquireMachinesLock for "force-systemd-env-384000"
	I0816 05:29:47.411890    8234 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:29:47.412169    8234 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:29:47.419698    8234 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 05:29:47.467785    8234 start.go:159] libmachine.API.Create for "force-systemd-env-384000" (driver="qemu2")
	I0816 05:29:47.467832    8234 client.go:168] LocalClient.Create starting
	I0816 05:29:47.467973    8234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:29:47.468040    8234 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:47.468056    8234 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:47.468110    8234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:29:47.468154    8234 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:47.468171    8234 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:47.468973    8234 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:29:47.636737    8234 main.go:141] libmachine: Creating SSH key...
	I0816 05:29:47.745677    8234 main.go:141] libmachine: Creating Disk image...
	I0816 05:29:47.745683    8234 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:29:47.745892    8234 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2
	I0816 05:29:47.755026    8234 main.go:141] libmachine: STDOUT: 
	I0816 05:29:47.755043    8234 main.go:141] libmachine: STDERR: 
	I0816 05:29:47.755108    8234 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2 +20000M
	I0816 05:29:47.762971    8234 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:29:47.762985    8234 main.go:141] libmachine: STDERR: 
	I0816 05:29:47.763011    8234 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2
	I0816 05:29:47.763015    8234 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:29:47.763030    8234 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:29:47.763054    8234 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:13:ce:81:a4:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/force-systemd-env-384000/disk.qcow2
	I0816 05:29:47.764672    8234 main.go:141] libmachine: STDOUT: 
	I0816 05:29:47.764697    8234 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:29:47.764712    8234 client.go:171] duration metric: took 296.878083ms to LocalClient.Create
	I0816 05:29:49.766852    8234 start.go:128] duration metric: took 2.354660959s to createHost
	I0816 05:29:49.766919    8234 start.go:83] releasing machines lock for "force-systemd-env-384000", held for 2.355209917s
	W0816 05:29:49.767313    8234 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:49.778891    8234 out.go:201] 
	W0816 05:29:49.782840    8234 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:29:49.782867    8234 out.go:270] * 
	* 
	W0816 05:29:49.785730    8234 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:29:49.794825    8234 out.go:201] 

** /stderr **
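For context on the "executing:" lines in the log above: libmachine prepares the VM disk by converting a raw scratch file to qcow2 and then growing it by the requested size. Below is a minimal Go sketch of those two steps using the same qemu-img flags the log shows; the disk path is an illustrative stand-in, not the real machine directory.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, mirroring the
// STDOUT:/STDERR: pairs that libmachine logs after each "executing:" line.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Illustrative stand-in path; the log uses the profile's machine directory.
	disk := "disk.qcow2"

	// Step 1: convert the raw scratch image to qcow2 (see "qemu-img convert" above).
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk+".raw", disk); err != nil {
		fmt.Fprintln(os.Stderr, "convert:", err)
		os.Exit(1)
	}
	// Step 2: grow the image by 20000 MB ("+20000M" is a relative resize,
	// matching "qemu-img resize ... +20000M" in the log).
	if err := run("qemu-img", "resize", disk, "+20000M"); err != nil {
		fmt.Fprintln(os.Stderr, "resize:", err)
		os.Exit(1)
	}
}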
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-384000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-384000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-384000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.016709ms)

-- stdout --
	* The control-plane node force-systemd-env-384000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-384000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-384000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-16 05:29:49.887225 -0700 PDT m=+627.373300418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-384000 -n force-systemd-env-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-384000 -n force-systemd-env-384000: exit status 7 (34.249292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-384000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-384000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-384000
--- FAIL: TestForceSystemdEnv (10.02s)
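Every failure in this report ultimately reduces to the same line: Failed to connect to "/var/run/socket_vmnet": Connection refused. Nothing is listening on the unix socket that socket_vmnet_client needs before it can exec QEMU. A minimal probe is sketched below, assuming only the socket path taken from the log; it is a diagnostic illustration, not part of minikube.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path copied from the failing log lines; everything else is illustrative.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" (or "no such file or directory") here means the
		// socket_vmnet daemon is not running, which is exactly what QEMU hit above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}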

TestErrorSpam/setup (9.82s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-943000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-943000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 --driver=qemu2 : exit status 80 (9.821441625s)

-- stdout --
	* [nospam-943000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-943000" primary control-plane node in "nospam-943000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-943000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-943000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-943000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-943000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-943000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-943000" primary control-plane node in "nospam-943000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-943000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-943000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.82s)

TestFunctional/serial/StartWithProxy (9.98s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-894000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-894000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.910955375s)

-- stdout --
	* [functional-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-894000" primary control-plane node in "functional-894000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-894000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50986 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50986 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50986 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-894000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-894000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-894000" primary control-plane node in "functional-894000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-894000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50986 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50986 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50986 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-894000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (68.007875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.98s)
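The repeated "Local proxy ignored" warnings in this test are expected: HTTP_PROXY=localhost:50986 points at the host's loopback interface, which is unreachable from inside the guest VM, so it is not forwarded to the Docker environment. The sketch below shows that kind of loopback check; it is illustrative only, not minikube's actual implementation.

package main

import (
	"fmt"
	"net"
	"strings"
)

// isLocalProxy reports whether a proxy address points at the host's loopback
// interface, which a guest VM cannot reach through its own network stack.
func isLocalProxy(hostport string) bool {
	host := hostport
	if h, _, err := net.SplitHostPort(hostport); err == nil {
		host = h
	}
	if strings.EqualFold(host, "localhost") {
		return true
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

func main() {
	fmt.Println(isLocalProxy("localhost:50986")) // true: matches the ignored proxy above
	fmt.Println(isLocalProxy("proxy.corp:3128")) // false: would be passed through
}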

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-894000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-894000 --alsologtostderr -v=8: exit status 80 (5.193789667s)

-- stdout --
	* [functional-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-894000" primary control-plane node in "functional-894000" cluster
	* Restarting existing qemu2 VM for "functional-894000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-894000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:20:24.792394    6971 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:20:24.792529    6971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:20:24.792532    6971 out.go:358] Setting ErrFile to fd 2...
	I0816 05:20:24.792535    6971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:20:24.792661    6971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:20:24.793677    6971 out.go:352] Setting JSON to false
	I0816 05:20:24.809943    6971 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4793,"bootTime":1723806031,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:20:24.810009    6971 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:20:24.815387    6971 out.go:177] * [functional-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:20:24.823457    6971 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:20:24.823486    6971 notify.go:220] Checking for updates...
	I0816 05:20:24.830447    6971 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:20:24.834354    6971 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:20:24.838396    6971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:20:24.844396    6971 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:20:24.847392    6971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:20:24.850765    6971 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:20:24.850821    6971 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:20:24.855402    6971 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:20:24.863417    6971 start.go:297] selected driver: qemu2
	I0816 05:20:24.863426    6971 start.go:901] validating driver "qemu2" against &{Name:functional-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:20:24.863512    6971 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:20:24.866047    6971 cni.go:84] Creating CNI manager for ""
	I0816 05:20:24.866064    6971 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:20:24.866110    6971 start.go:340] cluster config:
	{Name:functional-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:20:24.869825    6971 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:20:24.877433    6971 out.go:177] * Starting "functional-894000" primary control-plane node in "functional-894000" cluster
	I0816 05:20:24.880425    6971 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:20:24.880439    6971 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:20:24.880446    6971 cache.go:56] Caching tarball of preloaded images
	I0816 05:20:24.880502    6971 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:20:24.880508    6971 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:20:24.880564    6971 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/functional-894000/config.json ...
	I0816 05:20:24.881012    6971 start.go:360] acquireMachinesLock for functional-894000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:20:24.881044    6971 start.go:364] duration metric: took 25.541µs to acquireMachinesLock for "functional-894000"
	I0816 05:20:24.881055    6971 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:20:24.881060    6971 fix.go:54] fixHost starting: 
	I0816 05:20:24.881193    6971 fix.go:112] recreateIfNeeded on functional-894000: state=Stopped err=<nil>
	W0816 05:20:24.881202    6971 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:20:24.889353    6971 out.go:177] * Restarting existing qemu2 VM for "functional-894000" ...
	I0816 05:20:24.893345    6971 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:20:24.893382    6971 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:86:7c:ef:f2:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/disk.qcow2
	I0816 05:20:24.895655    6971 main.go:141] libmachine: STDOUT: 
	I0816 05:20:24.895677    6971 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:20:24.895706    6971 fix.go:56] duration metric: took 14.647917ms for fixHost
	I0816 05:20:24.895711    6971 start.go:83] releasing machines lock for "functional-894000", held for 14.662709ms
	W0816 05:20:24.895725    6971 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:20:24.895757    6971 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:20:24.895763    6971 start.go:729] Will try again in 5 seconds ...
	I0816 05:20:29.897926    6971 start.go:360] acquireMachinesLock for functional-894000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:20:29.898299    6971 start.go:364] duration metric: took 263.958µs to acquireMachinesLock for "functional-894000"
	I0816 05:20:29.898459    6971 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:20:29.898480    6971 fix.go:54] fixHost starting: 
	I0816 05:20:29.899222    6971 fix.go:112] recreateIfNeeded on functional-894000: state=Stopped err=<nil>
	W0816 05:20:29.899249    6971 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:20:29.903636    6971 out.go:177] * Restarting existing qemu2 VM for "functional-894000" ...
	I0816 05:20:29.911585    6971 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:20:29.911808    6971 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:86:7c:ef:f2:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/disk.qcow2
	I0816 05:20:29.920667    6971 main.go:141] libmachine: STDOUT: 
	I0816 05:20:29.920731    6971 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:20:29.920792    6971 fix.go:56] duration metric: took 22.311417ms for fixHost
	I0816 05:20:29.920806    6971 start.go:83] releasing machines lock for "functional-894000", held for 22.482292ms
	W0816 05:20:29.920958    6971 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:20:29.928590    6971 out.go:201] 
	W0816 05:20:29.932632    6971 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:20:29.932675    6971 out.go:270] * 
	* 
	W0816 05:20:29.935353    6971 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:20:29.943612    6971 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-894000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.195562375s for "functional-894000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (69.184708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
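Both start attempts in this log follow the same shape: StartHost fails, and after "Will try again in 5 seconds ..." a single delayed retry runs before the command exits with GUEST_PROVISION. A compact sketch of that retry-once pattern as it appears in these logs is shown below; it illustrates the observed control flow, not minikube's actual code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the real driver start; it always fails here,
// just as both attempts do in the log above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}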

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.046458ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-894000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (30.064333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-894000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-894000 get po -A: exit status 1 (26.472625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-894000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-894000\n"*: args "kubectl --context functional-894000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-894000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (29.675375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)
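The KubeContext and KubectlGetPods failures above are downstream of the failed start: since the cluster never came up, minikube never wrote a functional-894000 context into the kubeconfig, so current-context is unset and the named context cannot be found. The sketch below reproduces both checks with client-go's clientcmd package; the import path and calls are real, but the program itself is illustrative and not part of the test suite.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default locations ($KUBECONFIG, else ~/.kube/config),
	// the same file kubectl consults in the failing subtests above.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	if cfg.CurrentContext == "" {
		fmt.Println("current-context is not set") // the KubeContext failure
	}
	if _, ok := cfg.Contexts["functional-894000"]; !ok {
		fmt.Println("context was not found for specified context: functional-894000") // the KubectlGetPods failure
	}
}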

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh sudo crictl images: exit status 83 (49.793792ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-894000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (39.8275ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-894000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.99925ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.822875ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-894000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.77s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 kubectl -- --context functional-894000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 kubectl -- --context functional-894000 get pods: exit status 1 (736.183208ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-894000
	* no server found for cluster "functional-894000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-894000 kubectl -- --context functional-894000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (32.475541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.77s)
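
The stderr above means the kubeconfig (reported elsewhere in this run as /Users/jenkins/minikube-integration/19423-6249/kubeconfig) simply has no functional-894000 entry, consistent with the earlier failed start never registering the cluster. One way to probe for that precondition is to load the kubeconfig and look the context up; a sketch assuming k8s.io/client-go is available (not part of functional_test.go):

	// Sketch: check whether a kubeconfig context exists before calling kubectl.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/Users/jenkins/minikube-integration/19423-6249/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("cannot read kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["functional-894000"]; !ok {
			// The state this test hit: no context entry, so every
			// `kubectl --context functional-894000` call fails immediately.
			fmt.Println("context not found: functional-894000")
			return
		}
		fmt.Println("context present")
	}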

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.06s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-894000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-894000 get pods: exit status 1 (1.02873525s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-894000
	* no server found for cluster "functional-894000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-894000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (29.119583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.06s)

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-894000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-894000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.181973167s)

-- stdout --
	* [functional-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-894000" primary control-plane node in "functional-894000" cluster
	* Restarting existing qemu2 VM for "functional-894000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-894000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-894000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.182537958s for "functional-894000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (70.411542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
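
Both restart attempts above die at the same step: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and "Connection refused" on /var/run/socket_vmnet means nothing is listening on that unix socket, i.e. the socket_vmnet daemon is not running on the agent. A minimal diagnostic probe for that condition (a Go sketch, not part of the suite; the socket path is taken from the error):

	// Sketch: probe the socket_vmnet unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the failure in this report.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}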

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-894000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-894000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.639416ms)

** stderr ** 
	error: context "functional-894000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-894000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (30.900042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
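
This check asks kubectl for the control-plane pods as JSON; here it fails before any parsing because the context is gone. For reference, the decode-and-inspect half of such a check can be sketched as below (illustrative Go with a struct covering only the fields a health check would read; not the suite's own code):

	// Sketch: decode `kubectl get po -o=json` output and print pod phases.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase string `json:"phase"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-894000",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err) // the state this report is in
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println("unexpected JSON:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Metadata.Name, p.Status.Phase)
		}
	}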

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 logs: exit status 83 (75.692583ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | -p download-only-222000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| delete  | -p download-only-222000                                                  | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-783000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | -p download-only-783000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| delete  | -p download-only-783000                                                  | download-only-783000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| delete  | -p download-only-222000                                                  | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| delete  | -p download-only-783000                                                  | download-only-783000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| start   | --download-only -p                                                       | binary-mirror-393000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | binary-mirror-393000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50949                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-393000                                                  | binary-mirror-393000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| addons  | enable dashboard -p                                                      | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | addons-851000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | addons-851000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-851000 --wait=true                                             | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-851000                                                         | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| start   | -p nospam-943000 -n=1 --memory=2250 --wait=false                         | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-943000                                                         | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	| start   | -p functional-894000                                                     | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-894000                                                     | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | minikube-local-cache-test:functional-894000                              |                      |         |         |                     |                     |
	| cache   | functional-894000 cache delete                                           | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | minikube-local-cache-test:functional-894000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	| ssh     | functional-894000 ssh sudo                                               | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-894000                                                        | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-894000 ssh                                                    | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-894000 cache reload                                           | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	| ssh     | functional-894000 ssh                                                    | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-894000 kubectl --                                             | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | --context functional-894000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-894000                                                     | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 05:20:35
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 05:20:35.265424    7047 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:20:35.265539    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:20:35.265541    7047 out.go:358] Setting ErrFile to fd 2...
	I0816 05:20:35.265543    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:20:35.265647    7047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:20:35.266842    7047 out.go:352] Setting JSON to false
	I0816 05:20:35.282554    7047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4804,"bootTime":1723806031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:20:35.282618    7047 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:20:35.287120    7047 out.go:177] * [functional-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:20:35.297158    7047 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:20:35.297194    7047 notify.go:220] Checking for updates...
	I0816 05:20:35.305158    7047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:20:35.309129    7047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:20:35.312170    7047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:20:35.315130    7047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:20:35.318127    7047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:20:35.321442    7047 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:20:35.321500    7047 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:20:35.326163    7047 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:20:35.333133    7047 start.go:297] selected driver: qemu2
	I0816 05:20:35.333139    7047 start.go:901] validating driver "qemu2" against &{Name:functional-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:20:35.333210    7047 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:20:35.335499    7047 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:20:35.335523    7047 cni.go:84] Creating CNI manager for ""
	I0816 05:20:35.335533    7047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:20:35.335584    7047 start.go:340] cluster config:
	{Name:functional-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:20:35.339099    7047 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:20:35.347136    7047 out.go:177] * Starting "functional-894000" primary control-plane node in "functional-894000" cluster
	I0816 05:20:35.351181    7047 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:20:35.351196    7047 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:20:35.351205    7047 cache.go:56] Caching tarball of preloaded images
	I0816 05:20:35.351273    7047 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:20:35.351278    7047 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:20:35.351342    7047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/functional-894000/config.json ...
	I0816 05:20:35.351780    7047 start.go:360] acquireMachinesLock for functional-894000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:20:35.351814    7047 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "functional-894000"
	I0816 05:20:35.351823    7047 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:20:35.351826    7047 fix.go:54] fixHost starting: 
	I0816 05:20:35.351950    7047 fix.go:112] recreateIfNeeded on functional-894000: state=Stopped err=<nil>
	W0816 05:20:35.351957    7047 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:20:35.360120    7047 out.go:177] * Restarting existing qemu2 VM for "functional-894000" ...
	I0816 05:20:35.364148    7047 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:20:35.364190    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:86:7c:ef:f2:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/disk.qcow2
	I0816 05:20:35.366229    7047 main.go:141] libmachine: STDOUT: 
	I0816 05:20:35.366243    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:20:35.366272    7047 fix.go:56] duration metric: took 14.44625ms for fixHost
	I0816 05:20:35.366275    7047 start.go:83] releasing machines lock for "functional-894000", held for 14.457667ms
	W0816 05:20:35.366281    7047 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:20:35.366314    7047 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:20:35.366319    7047 start.go:729] Will try again in 5 seconds ...
	I0816 05:20:40.368481    7047 start.go:360] acquireMachinesLock for functional-894000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:20:40.368833    7047 start.go:364] duration metric: took 314.375µs to acquireMachinesLock for "functional-894000"
	I0816 05:20:40.368952    7047 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:20:40.368965    7047 fix.go:54] fixHost starting: 
	I0816 05:20:40.369642    7047 fix.go:112] recreateIfNeeded on functional-894000: state=Stopped err=<nil>
	W0816 05:20:40.369658    7047 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:20:40.373239    7047 out.go:177] * Restarting existing qemu2 VM for "functional-894000" ...
	I0816 05:20:40.377976    7047 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:20:40.378164    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:86:7c:ef:f2:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/disk.qcow2
	I0816 05:20:40.387152    7047 main.go:141] libmachine: STDOUT: 
	I0816 05:20:40.387225    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:20:40.387292    7047 fix.go:56] duration metric: took 18.332375ms for fixHost
	I0816 05:20:40.387300    7047 start.go:83] releasing machines lock for "functional-894000", held for 18.455334ms
	W0816 05:20:40.387459    7047 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:20:40.395035    7047 out.go:201] 
	W0816 05:20:40.399071    7047 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:20:40.399096    7047 out.go:270] * 
	W0816 05:20:40.401576    7047 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:20:40.409016    7047 out.go:201] 
	
	
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-894000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | -p download-only-222000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| delete  | -p download-only-222000                                                  | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| start   | -o=json --download-only                                                  | download-only-783000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | -p download-only-783000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| delete  | -p download-only-783000                                                  | download-only-783000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| delete  | -p download-only-222000                                                  | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| delete  | -p download-only-783000                                                  | download-only-783000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| start   | --download-only -p                                                       | binary-mirror-393000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | binary-mirror-393000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50949                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-393000                                                  | binary-mirror-393000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| addons  | enable dashboard -p                                                      | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | addons-851000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | addons-851000                                                            |                      |         |         |                     |                     |
| start   | -p addons-851000 --wait=true                                             | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-851000                                                         | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| start   | -p nospam-943000 -n=1 --memory=2250 --wait=false                         | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-943000                                                         | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
| start   | -p functional-894000                                                     | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-894000                                                     | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | minikube-local-cache-test:functional-894000                              |                      |         |         |                     |                     |
| cache   | functional-894000 cache delete                                           | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | minikube-local-cache-test:functional-894000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
| ssh     | functional-894000 ssh sudo                                               | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-894000                                                        | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-894000 ssh                                                    | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-894000 cache reload                                           | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
| ssh     | functional-894000 ssh                                                    | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-894000 kubectl --                                             | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | --context functional-894000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-894000                                                     | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/08/16 05:20:35
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
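
(Editorial aside, not part of the captured log.) The "Log line format" header above is the standard glog layout. A minimal, self-contained Go sketch of how such lines can be tokenized; the regex and field names are illustrative assumptions, not minikube code:

package main

import (
	"fmt"
	"regexp"
)

// Matches the documented layout: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	sample := "I0816 05:20:35.265424    7047 out.go:345] Setting OutFile to fd 1 ..."
	if m := glogLine.FindStringSubmatch(sample); m != nil {
		// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=thread id, m[5]=file:line, m[6]=message
		fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
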
I0816 05:20:35.265424    7047 out.go:345] Setting OutFile to fd 1 ...
I0816 05:20:35.265539    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:20:35.265541    7047 out.go:358] Setting ErrFile to fd 2...
I0816 05:20:35.265543    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:20:35.265647    7047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:20:35.266842    7047 out.go:352] Setting JSON to false
I0816 05:20:35.282554    7047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4804,"bootTime":1723806031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0816 05:20:35.282618    7047 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0816 05:20:35.287120    7047 out.go:177] * [functional-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0816 05:20:35.297158    7047 out.go:177]   - MINIKUBE_LOCATION=19423
I0816 05:20:35.297194    7047 notify.go:220] Checking for updates...
I0816 05:20:35.305158    7047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
I0816 05:20:35.309129    7047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0816 05:20:35.312170    7047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0816 05:20:35.315130    7047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
I0816 05:20:35.318127    7047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0816 05:20:35.321442    7047 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:20:35.321500    7047 driver.go:394] Setting default libvirt URI to qemu:///system
I0816 05:20:35.326163    7047 out.go:177] * Using the qemu2 driver based on existing profile
I0816 05:20:35.333133    7047 start.go:297] selected driver: qemu2
I0816 05:20:35.333139    7047 start.go:901] validating driver "qemu2" against &{Name:functional-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0816 05:20:35.333210    7047 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0816 05:20:35.335499    7047 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0816 05:20:35.335523    7047 cni.go:84] Creating CNI manager for ""
I0816 05:20:35.335533    7047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0816 05:20:35.335584    7047 start.go:340] cluster config:
{Name:functional-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0816 05:20:35.339099    7047 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0816 05:20:35.347136    7047 out.go:177] * Starting "functional-894000" primary control-plane node in "functional-894000" cluster
I0816 05:20:35.351181    7047 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0816 05:20:35.351196    7047 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0816 05:20:35.351205    7047 cache.go:56] Caching tarball of preloaded images
I0816 05:20:35.351273    7047 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0816 05:20:35.351278    7047 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0816 05:20:35.351342    7047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/functional-894000/config.json ...
I0816 05:20:35.351780    7047 start.go:360] acquireMachinesLock for functional-894000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0816 05:20:35.351814    7047 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "functional-894000"
I0816 05:20:35.351823    7047 start.go:96] Skipping create...Using existing machine configuration
I0816 05:20:35.351826    7047 fix.go:54] fixHost starting: 
I0816 05:20:35.351950    7047 fix.go:112] recreateIfNeeded on functional-894000: state=Stopped err=<nil>
W0816 05:20:35.351957    7047 fix.go:138] unexpected machine state, will restart: <nil>
I0816 05:20:35.360120    7047 out.go:177] * Restarting existing qemu2 VM for "functional-894000" ...
I0816 05:20:35.364148    7047 qemu.go:418] Using hvf for hardware acceleration
I0816 05:20:35.364190    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:86:7c:ef:f2:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/disk.qcow2
I0816 05:20:35.366229    7047 main.go:141] libmachine: STDOUT: 
I0816 05:20:35.366243    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0816 05:20:35.366272    7047 fix.go:56] duration metric: took 14.44625ms for fixHost
I0816 05:20:35.366275    7047 start.go:83] releasing machines lock for "functional-894000", held for 14.457667ms
W0816 05:20:35.366281    7047 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0816 05:20:35.366314    7047 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0816 05:20:35.366319    7047 start.go:729] Will try again in 5 seconds ...
I0816 05:20:40.368481    7047 start.go:360] acquireMachinesLock for functional-894000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0816 05:20:40.368833    7047 start.go:364] duration metric: took 314.375µs to acquireMachinesLock for "functional-894000"
I0816 05:20:40.368952    7047 start.go:96] Skipping create...Using existing machine configuration
I0816 05:20:40.368965    7047 fix.go:54] fixHost starting: 
I0816 05:20:40.369642    7047 fix.go:112] recreateIfNeeded on functional-894000: state=Stopped err=<nil>
W0816 05:20:40.369658    7047 fix.go:138] unexpected machine state, will restart: <nil>
I0816 05:20:40.373239    7047 out.go:177] * Restarting existing qemu2 VM for "functional-894000" ...
I0816 05:20:40.377976    7047 qemu.go:418] Using hvf for hardware acceleration
I0816 05:20:40.378164    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:86:7c:ef:f2:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/disk.qcow2
I0816 05:20:40.387152    7047 main.go:141] libmachine: STDOUT: 
I0816 05:20:40.387225    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0816 05:20:40.387292    7047 fix.go:56] duration metric: took 18.332375ms for fixHost
I0816 05:20:40.387300    7047 start.go:83] releasing machines lock for "functional-894000", held for 18.455334ms
W0816 05:20:40.387459    7047 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0816 05:20:40.395035    7047 out.go:201] 
W0816 05:20:40.399071    7047 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0816 05:20:40.399096    7047 out.go:270] * 
W0816 05:20:40.401576    7047 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0816 05:20:40.409016    7047 out.go:201] 

* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
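
(Editorial aside, not part of the captured log.) Both start attempts above die at the same step: the qemu2 driver hands the VM's network to socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused, so the guest never boots and "minikube logs" has nothing to report. A self-contained Go sketch that reproduces just that failing dial — an illustration, not harness code:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Same unix-socket connection that socket_vmnet_client attempts;
	// "connection refused" means the socket_vmnet daemon is not listening.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a Homebrew-managed install the usual remedy is restarting the daemon (sudo brew services start socket_vmnet); this agent appears to use a manual /opt/socket_vmnet install, so the exact restart command here is an assumption.
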

TestFunctional/serial/LogsFileCmd (0.07s)
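
(Editorial aside, not part of the captured log.) The failure message quoted below (functional_test.go:1228) amounts to a substring check over the captured "minikube logs" output; because the VM never started, the captured text lacks the expected word. A simplified, hypothetical form of that check — not the actual harness code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run `minikube logs` for the profile and assert the output mentions "Linux".
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-894000", "logs").CombinedOutput()
	if !strings.Contains(string(out), "Linux") {
		fmt.Printf("expected minikube logs to include word: -%q- but got \n%s\n", "Linux", out)
	}
}
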

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2685898726/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | -p download-only-222000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| delete  | -p download-only-222000                                                  | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| start   | -o=json --download-only                                                  | download-only-783000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | -p download-only-783000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| delete  | -p download-only-783000                                                  | download-only-783000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| delete  | -p download-only-222000                                                  | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| delete  | -p download-only-783000                                                  | download-only-783000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| start   | --download-only -p                                                       | binary-mirror-393000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | binary-mirror-393000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50949                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-393000                                                  | binary-mirror-393000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| addons  | enable dashboard -p                                                      | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | addons-851000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | addons-851000                                                            |                      |         |         |                     |                     |
| start   | -p addons-851000 --wait=true                                             | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-851000                                                         | addons-851000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
| start   | -p nospam-943000 -n=1 --memory=2250 --wait=false                         | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-943000 --log_dir                                                  | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-943000                                                         | nospam-943000        | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
| start   | -p functional-894000                                                     | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-894000                                                     | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-894000 cache add                                              | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | minikube-local-cache-test:functional-894000                              |                      |         |         |                     |                     |
| cache   | functional-894000 cache delete                                           | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | minikube-local-cache-test:functional-894000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
| ssh     | functional-894000 ssh sudo                                               | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-894000                                                        | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-894000 ssh                                                    | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-894000 cache reload                                           | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
| ssh     | functional-894000 ssh                                                    | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT | 16 Aug 24 05:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-894000 kubectl --                                             | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | --context functional-894000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-894000                                                     | functional-894000    | jenkins | v1.33.1 | 16 Aug 24 05:20 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/08/16 05:20:35
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0816 05:20:35.265424    7047 out.go:345] Setting OutFile to fd 1 ...
I0816 05:20:35.265539    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:20:35.265541    7047 out.go:358] Setting ErrFile to fd 2...
I0816 05:20:35.265543    7047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:20:35.265647    7047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:20:35.266842    7047 out.go:352] Setting JSON to false
I0816 05:20:35.282554    7047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4804,"bootTime":1723806031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0816 05:20:35.282618    7047 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0816 05:20:35.287120    7047 out.go:177] * [functional-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0816 05:20:35.297158    7047 out.go:177]   - MINIKUBE_LOCATION=19423
I0816 05:20:35.297194    7047 notify.go:220] Checking for updates...
I0816 05:20:35.305158    7047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
I0816 05:20:35.309129    7047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0816 05:20:35.312170    7047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0816 05:20:35.315130    7047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
I0816 05:20:35.318127    7047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0816 05:20:35.321442    7047 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:20:35.321500    7047 driver.go:394] Setting default libvirt URI to qemu:///system
I0816 05:20:35.326163    7047 out.go:177] * Using the qemu2 driver based on existing profile
I0816 05:20:35.333133    7047 start.go:297] selected driver: qemu2
I0816 05:20:35.333139    7047 start.go:901] validating driver "qemu2" against &{Name:functional-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0816 05:20:35.333210    7047 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0816 05:20:35.335499    7047 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0816 05:20:35.335523    7047 cni.go:84] Creating CNI manager for ""
I0816 05:20:35.335533    7047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0816 05:20:35.335584    7047 start.go:340] cluster config:
{Name:functional-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0816 05:20:35.339099    7047 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0816 05:20:35.347136    7047 out.go:177] * Starting "functional-894000" primary control-plane node in "functional-894000" cluster
I0816 05:20:35.351181    7047 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0816 05:20:35.351196    7047 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0816 05:20:35.351205    7047 cache.go:56] Caching tarball of preloaded images
I0816 05:20:35.351273    7047 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0816 05:20:35.351278    7047 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0816 05:20:35.351342    7047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/functional-894000/config.json ...
I0816 05:20:35.351780    7047 start.go:360] acquireMachinesLock for functional-894000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0816 05:20:35.351814    7047 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "functional-894000"
I0816 05:20:35.351823    7047 start.go:96] Skipping create...Using existing machine configuration
I0816 05:20:35.351826    7047 fix.go:54] fixHost starting: 
I0816 05:20:35.351950    7047 fix.go:112] recreateIfNeeded on functional-894000: state=Stopped err=<nil>
W0816 05:20:35.351957    7047 fix.go:138] unexpected machine state, will restart: <nil>
I0816 05:20:35.360120    7047 out.go:177] * Restarting existing qemu2 VM for "functional-894000" ...
I0816 05:20:35.364148    7047 qemu.go:418] Using hvf for hardware acceleration
I0816 05:20:35.364190    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:86:7c:ef:f2:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/disk.qcow2
I0816 05:20:35.366229    7047 main.go:141] libmachine: STDOUT: 
I0816 05:20:35.366243    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0816 05:20:35.366272    7047 fix.go:56] duration metric: took 14.44625ms for fixHost
I0816 05:20:35.366275    7047 start.go:83] releasing machines lock for "functional-894000", held for 14.457667ms
W0816 05:20:35.366281    7047 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0816 05:20:35.366314    7047 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0816 05:20:35.366319    7047 start.go:729] Will try again in 5 seconds ...
I0816 05:20:40.368481    7047 start.go:360] acquireMachinesLock for functional-894000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0816 05:20:40.368833    7047 start.go:364] duration metric: took 314.375µs to acquireMachinesLock for "functional-894000"
I0816 05:20:40.368952    7047 start.go:96] Skipping create...Using existing machine configuration
I0816 05:20:40.368965    7047 fix.go:54] fixHost starting: 
I0816 05:20:40.369642    7047 fix.go:112] recreateIfNeeded on functional-894000: state=Stopped err=<nil>
W0816 05:20:40.369658    7047 fix.go:138] unexpected machine state, will restart: <nil>
I0816 05:20:40.373239    7047 out.go:177] * Restarting existing qemu2 VM for "functional-894000" ...
I0816 05:20:40.377976    7047 qemu.go:418] Using hvf for hardware acceleration
I0816 05:20:40.378164    7047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:86:7c:ef:f2:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/functional-894000/disk.qcow2
I0816 05:20:40.387152    7047 main.go:141] libmachine: STDOUT: 
I0816 05:20:40.387225    7047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0816 05:20:40.387292    7047 fix.go:56] duration metric: took 18.332375ms for fixHost
I0816 05:20:40.387300    7047 start.go:83] releasing machines lock for "functional-894000", held for 18.455334ms
W0816 05:20:40.387459    7047 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0816 05:20:40.395035    7047 out.go:201] 
W0816 05:20:40.399071    7047 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0816 05:20:40.399096    7047 out.go:270] * 
W0816 05:20:40.401576    7047 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0816 05:20:40.409016    7047 out.go:201] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
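Note: every restart attempt in the log above dies at the same step: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and every test that needs a running cluster fails downstream. A minimal host-side probe (a sketch, not part of the test suite; the socket path is the one shown in the log) can confirm whether the daemon is accepting connections:

    // probe_socket_vmnet.go: dial the unix socket the qemu2 driver uses.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the failing log line
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // Reproduces the driver's failure mode: "connection refused" means
            // the socket_vmnet daemon is not running (or the socket is stale).
            fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
            return
        }
        defer conn.Close()
        fmt.Printf("socket_vmnet is listening at %s\n", sock)
    }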

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-894000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-894000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.0735ms)

** stderr ** 
	error: context "functional-894000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-894000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
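Note: the kubectl failures in this and the following tests share one root cause: because "minikube start" never succeeded, no "functional-894000" entry was ever written to the kubeconfig, so every --context lookup fails. A sketch of the check kubectl effectively performs (assumes k8s.io/client-go; not part of the suite):

    // check_context.go: look up the test profile's context in the kubeconfig.
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
        if err != nil {
            fmt.Println("cannot load kubeconfig:", err)
            return
        }
        if _, ok := cfg.Contexts["functional-894000"]; !ok {
            // Matches the log: the context is only created once the cluster starts.
            fmt.Println(`context "functional-894000" does not exist`)
            return
        }
        fmt.Println("context present")
    }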

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-894000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-894000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-894000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-894000 --alsologtostderr -v=1] stderr:
I0816 05:21:21.045274    7349 out.go:345] Setting OutFile to fd 1 ...
I0816 05:21:21.045669    7349 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.045673    7349 out.go:358] Setting ErrFile to fd 2...
I0816 05:21:21.045678    7349 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.045848    7349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:21:21.046032    7349 mustload.go:65] Loading cluster: functional-894000
I0816 05:21:21.046220    7349 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:21:21.050363    7349 out.go:177] * The control-plane node functional-894000 host is not running: state=Stopped
I0816 05:21:21.054370    7349 out.go:177]   To start a cluster, run: "minikube start -p functional-894000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (42.024834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 status: exit status 7 (29.366ms)

-- stdout --
	functional-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-894000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.491958ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-894000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 status -o json: exit status 7 (30.279583ms)

-- stdout --
	{"Name":"functional-894000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-894000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (29.396709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
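Note: the custom --format/-f output above is a Go text/template rendered against the same status struct whose JSON form appears in the last stdout block (the "kublet" key is verbatim from the test's format string, misspelling included). A minimal sketch reproducing the rendered line:

    // status_template.go: render the status line seen in the -- stdout -- block.
    package main

    import (
        "os"
        "text/template"
    )

    // Status mirrors the fields visible in the JSON output in the log.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        st := Status{Name: "functional-894000", Host: "Stopped", Kubelet: "Stopped",
            APIServer: "Stopped", Kubeconfig: "Stopped"}
        tmpl := template.Must(template.New("status").Parse(
            "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
        // Prints: host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
        _ = tmpl.Execute(os.Stdout, st)
    }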

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-894000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-894000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.879666ms)

** stderr ** 
	error: context "functional-894000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-894000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-894000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-894000 describe po hello-node-connect: exit status 1 (26.274958ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
functional_test.go:1604: "kubectl --context functional-894000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-894000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-894000 logs -l app=hello-node-connect: exit status 1 (25.788167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
functional_test.go:1610: "kubectl --context functional-894000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-894000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-894000 describe svc hello-node-connect: exit status 1 (26.1665ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
functional_test.go:1616: "kubectl --context functional-894000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (29.546208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-894000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (30.7465ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "echo hello": exit status 83 (46.023834ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-894000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"\n"*. args "out/minikube-darwin-arm64 -p functional-894000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "cat /etc/hostname": exit status 83 (40.905375ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-894000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-894000"- but got *"* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"\n"*. args "out/minikube-darwin-arm64 -p functional-894000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (29.80825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.26s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (53.717542ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-894000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh -n functional-894000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh -n functional-894000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.917ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-894000 ssh -n functional-894000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-894000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-894000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 cp functional-894000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4040808212/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 cp functional-894000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4040808212/001/cp-test.txt: exit status 83 (41.667042ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-894000 cp functional-894000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4040808212/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh -n functional-894000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh -n functional-894000 "sudo cat /home/docker/cp-test.txt": exit status 83 (39.86325ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-894000 ssh -n functional-894000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4040808212/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (40.728542ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-894000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh -n functional-894000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh -n functional-894000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (43.876791ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-894000 ssh -n functional-894000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-894000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-894000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.26s)
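Note: the "(-want +got)" blocks above are string diffs in the style of github.com/google/go-cmp; "want" is the expected content of cp-test.txt, while "got" is minikube's advisory message, because every cp and ssh invocation short-circuited with exit status 83 instead of touching the file. A sketch (strings taken from the log) that reproduces such a diff:

    // diff_sketch.go: produce a (-want +got) diff like the ones in the log.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := "Test file for checking file cp process"
        got := "* The control-plane node functional-894000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-894000\"\n"
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("/testdata/cp-test.txt content mismatch (-want +got):\n%s", diff)
        }
    }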

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/6746/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/test/nested/copy/6746/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/test/nested/copy/6746/hosts": exit status 83 (40.374292ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/test/nested/copy/6746/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-894000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-894000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (30.695625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/6746.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/ssl/certs/6746.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/ssl/certs/6746.pem": exit status 83 (41.9995ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/6746.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-894000 ssh \"sudo cat /etc/ssl/certs/6746.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6746.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-894000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-894000"
	"""
)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/6746.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /usr/share/ca-certificates/6746.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /usr/share/ca-certificates/6746.pem": exit status 83 (45.563917ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/6746.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-894000 ssh \"sudo cat /usr/share/ca-certificates/6746.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6746.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-894000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-894000"
	"""
)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (41.588ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-894000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-894000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-894000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/67462.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/ssl/certs/67462.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/ssl/certs/67462.pem": exit status 83 (39.78975ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/67462.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-894000 ssh \"sudo cat /etc/ssl/certs/67462.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/67462.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-894000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-894000"
	"""
)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/67462.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /usr/share/ca-certificates/67462.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /usr/share/ca-certificates/67462.pem": exit status 83 (40.596917ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/67462.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-894000 ssh \"sudo cat /usr/share/ca-certificates/67462.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/67462.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-894000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-894000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (38.556125ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-894000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-894000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-894000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (31.114375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-894000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-894000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.658625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-894000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-894000 -n functional-894000: exit status 7 (30.346292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo systemctl is-active crio": exit status 83 (39.543ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 version -o=json --components: exit status 83 (40.890375ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-894000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-894000 image ls --format short --alsologtostderr:
I0816 05:21:21.451825    7364 out.go:345] Setting OutFile to fd 1 ...
I0816 05:21:21.451990    7364 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.451994    7364 out.go:358] Setting ErrFile to fd 2...
I0816 05:21:21.451996    7364 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.452116    7364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:21:21.452535    7364 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:21:21.452593    7364 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-894000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-894000 image ls --format table --alsologtostderr:
I0816 05:21:21.672729    7376 out.go:345] Setting OutFile to fd 1 ...
I0816 05:21:21.672892    7376 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.672896    7376 out.go:358] Setting ErrFile to fd 2...
I0816 05:21:21.672898    7376 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.673042    7376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:21:21.673459    7376 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:21:21.673531    7376 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-894000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-894000 image ls --format json --alsologtostderr:
I0816 05:21:21.637325    7374 out.go:345] Setting OutFile to fd 1 ...
I0816 05:21:21.637461    7374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.637465    7374 out.go:358] Setting ErrFile to fd 2...
I0816 05:21:21.637467    7374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.637588    7374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:21:21.637971    7374 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:21:21.638034    7374 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-894000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-894000 image ls --format yaml --alsologtostderr:
I0816 05:21:21.488503    7366 out.go:345] Setting OutFile to fd 1 ...
I0816 05:21:21.488645    7366 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.488649    7366 out.go:358] Setting ErrFile to fd 2...
I0816 05:21:21.488651    7366 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.488815    7366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:21:21.489206    7366 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:21:21.489271    7366 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh pgrep buildkitd: exit status 83 (41.874708ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image build -t localhost/my-image:functional-894000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-894000 image build -t localhost/my-image:functional-894000 testdata/build --alsologtostderr:
I0816 05:21:21.564364    7370 out.go:345] Setting OutFile to fd 1 ...
I0816 05:21:21.564942    7370 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.564953    7370 out.go:358] Setting ErrFile to fd 2...
I0816 05:21:21.564956    7370 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:21:21.565105    7370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:21:21.565520    7370 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:21:21.565922    7370 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:21:21.566154    7370 build_images.go:133] succeeded building to: 
I0816 05:21:21.566158    7370 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image ls
functional_test.go:446: expected "localhost/my-image:functional-894000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-894000 docker-env) && out/minikube-darwin-arm64 status -p functional-894000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-894000 docker-env) && out/minikube-darwin-arm64 status -p functional-894000": exit status 1 (43.016083ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 update-context --alsologtostderr -v=2: exit status 83 (42.864541ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
** stderr ** 
	I0816 05:21:21.322414    7358 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:21:21.323213    7358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:21:21.323217    7358 out.go:358] Setting ErrFile to fd 2...
	I0816 05:21:21.323219    7358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:21:21.323402    7358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:21:21.323631    7358 mustload.go:65] Loading cluster: functional-894000
	I0816 05:21:21.323857    7358 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:21:21.328698    7358 out.go:177] * The control-plane node functional-894000 host is not running: state=Stopped
	I0816 05:21:21.332670    7358 out.go:177]   To start a cluster, run: "minikube start -p functional-894000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-894000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 update-context --alsologtostderr -v=2: exit status 83 (42.566458ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
** stderr ** 
	I0816 05:21:21.409477    7362 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:21:21.409658    7362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:21:21.409661    7362 out.go:358] Setting ErrFile to fd 2...
	I0816 05:21:21.409663    7362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:21:21.409796    7362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:21:21.410030    7362 mustload.go:65] Loading cluster: functional-894000
	I0816 05:21:21.410216    7362 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:21:21.414692    7362 out.go:177] * The control-plane node functional-894000 host is not running: state=Stopped
	I0816 05:21:21.418726    7362 out.go:177]   To start a cluster, run: "minikube start -p functional-894000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-894000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 update-context --alsologtostderr -v=2: exit status 83 (42.625833ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
** stderr ** 
	I0816 05:21:21.365979    7360 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:21:21.366149    7360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:21:21.366152    7360 out.go:358] Setting ErrFile to fd 2...
	I0816 05:21:21.366155    7360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:21:21.366289    7360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:21:21.366521    7360 mustload.go:65] Loading cluster: functional-894000
	I0816 05:21:21.366723    7360 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:21:21.371490    7360 out.go:177] * The control-plane node functional-894000 host is not running: state=Stopped
	I0816 05:21:21.375632    7360 out.go:177]   To start a cluster, run: "minikube start -p functional-894000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-894000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-894000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-894000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.286959ms)

** stderr ** 
	error: context "functional-894000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-894000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 service list: exit status 83 (47.9825ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-894000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 service list -o json: exit status 83 (43.879208ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-894000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 service --namespace=default --https --url hello-node: exit status 83 (42.860209ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-894000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 service hello-node --url --format={{.IP}}: exit status 83 (41.829084ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-894000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 service hello-node --url: exit status 83 (42.69125ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-894000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test.go:1569: failed to parse "* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"": parse "* The control-plane node functional-894000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-894000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-894000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-894000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0816 05:20:42.271293    7164 out.go:345] Setting OutFile to fd 1 ...
I0816 05:20:42.271449    7164 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:20:42.271452    7164 out.go:358] Setting ErrFile to fd 2...
I0816 05:20:42.271454    7164 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:20:42.271589    7164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:20:42.271804    7164 mustload.go:65] Loading cluster: functional-894000
I0816 05:20:42.272000    7164 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:20:42.276303    7164 out.go:177] * The control-plane node functional-894000 host is not running: state=Stopped
I0816 05:20:42.287277    7164 out.go:177]   To start a cluster, run: "minikube start -p functional-894000"

stdout: * The control-plane node functional-894000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-894000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-894000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-894000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-894000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-894000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7165: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-894000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-894000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-894000": client config: context "functional-894000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (75.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-894000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-894000 get svc nginx-svc: exit status 1 (69.6625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-894000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-894000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (75.14s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image load --daemon kicbase/echo-server:functional-894000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-894000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image load --daemon kicbase/echo-server:functional-894000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-894000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-894000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image load --daemon kicbase/echo-server:functional-894000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-894000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image save kicbase/echo-server:functional-894000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-894000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.034087667s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.77s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.77s)

TestMultiControlPlane/serial/StartCluster (9.85s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-912000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-912000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.778650625s)

-- stdout --
	* [ha-912000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-912000" primary control-plane node in "ha-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:23:00.735051    7402 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:23:00.735202    7402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:23:00.735205    7402 out.go:358] Setting ErrFile to fd 2...
	I0816 05:23:00.735207    7402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:23:00.735363    7402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:23:00.736491    7402 out.go:352] Setting JSON to false
	I0816 05:23:00.752481    7402 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4949,"bootTime":1723806031,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:23:00.752545    7402 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:23:00.758974    7402 out.go:177] * [ha-912000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:23:00.764918    7402 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:23:00.764944    7402 notify.go:220] Checking for updates...
	I0816 05:23:00.772889    7402 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:23:00.775897    7402 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:23:00.778967    7402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:23:00.781892    7402 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:23:00.784926    7402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:23:00.788164    7402 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:23:00.791876    7402 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:23:00.798930    7402 start.go:297] selected driver: qemu2
	I0816 05:23:00.798940    7402 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:23:00.798948    7402 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:23:00.801303    7402 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:23:00.803919    7402 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:23:00.806999    7402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:23:00.807048    7402 cni.go:84] Creating CNI manager for ""
	I0816 05:23:00.807053    7402 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 05:23:00.807062    7402 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 05:23:00.807118    7402 start.go:340] cluster config:
	{Name:ha-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:23:00.810773    7402 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:23:00.817911    7402 out.go:177] * Starting "ha-912000" primary control-plane node in "ha-912000" cluster
	I0816 05:23:00.821926    7402 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:23:00.821941    7402 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:23:00.821946    7402 cache.go:56] Caching tarball of preloaded images
	I0816 05:23:00.822002    7402 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:23:00.822008    7402 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:23:00.822251    7402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/ha-912000/config.json ...
	I0816 05:23:00.822265    7402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/ha-912000/config.json: {Name:mkb2de98b5f4c7c56062c6207d5adf23193f00c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:23:00.822603    7402 start.go:360] acquireMachinesLock for ha-912000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:23:00.822638    7402 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "ha-912000"
	I0816 05:23:00.822651    7402 start.go:93] Provisioning new machine with config: &{Name:ha-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:23:00.822680    7402 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:23:00.830925    7402 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:23:00.848788    7402 start.go:159] libmachine.API.Create for "ha-912000" (driver="qemu2")
	I0816 05:23:00.848813    7402 client.go:168] LocalClient.Create starting
	I0816 05:23:00.848877    7402 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:23:00.848917    7402 main.go:141] libmachine: Decoding PEM data...
	I0816 05:23:00.848927    7402 main.go:141] libmachine: Parsing certificate...
	I0816 05:23:00.848967    7402 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:23:00.848991    7402 main.go:141] libmachine: Decoding PEM data...
	I0816 05:23:00.848998    7402 main.go:141] libmachine: Parsing certificate...
	I0816 05:23:00.849390    7402 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:23:01.002035    7402 main.go:141] libmachine: Creating SSH key...
	I0816 05:23:01.054372    7402 main.go:141] libmachine: Creating Disk image...
	I0816 05:23:01.054377    7402 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:23:01.054586    7402 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2
	I0816 05:23:01.063780    7402 main.go:141] libmachine: STDOUT: 
	I0816 05:23:01.063799    7402 main.go:141] libmachine: STDERR: 
	I0816 05:23:01.063866    7402 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2 +20000M
	I0816 05:23:01.071842    7402 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:23:01.071854    7402 main.go:141] libmachine: STDERR: 
	I0816 05:23:01.071871    7402 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2
	I0816 05:23:01.071876    7402 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:23:01.071885    7402 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:23:01.071910    7402 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ac:cc:10:bd:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2
	I0816 05:23:01.073510    7402 main.go:141] libmachine: STDOUT: 
	I0816 05:23:01.073524    7402 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:23:01.073544    7402 client.go:171] duration metric: took 224.729167ms to LocalClient.Create
	I0816 05:23:03.075740    7402 start.go:128] duration metric: took 2.25305125s to createHost
	I0816 05:23:03.075814    7402 start.go:83] releasing machines lock for "ha-912000", held for 2.253188959s
	W0816 05:23:03.075987    7402 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:23:03.092285    7402 out.go:177] * Deleting "ha-912000" in qemu2 ...
	W0816 05:23:03.119915    7402 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:23:03.119950    7402 start.go:729] Will try again in 5 seconds ...
	I0816 05:23:08.122203    7402 start.go:360] acquireMachinesLock for ha-912000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:23:08.122717    7402 start.go:364] duration metric: took 401.834µs to acquireMachinesLock for "ha-912000"
	I0816 05:23:08.122880    7402 start.go:93] Provisioning new machine with config: &{Name:ha-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:23:08.123167    7402 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:23:08.133795    7402 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:23:08.183519    7402 start.go:159] libmachine.API.Create for "ha-912000" (driver="qemu2")
	I0816 05:23:08.183564    7402 client.go:168] LocalClient.Create starting
	I0816 05:23:08.183668    7402 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:23:08.183775    7402 main.go:141] libmachine: Decoding PEM data...
	I0816 05:23:08.183793    7402 main.go:141] libmachine: Parsing certificate...
	I0816 05:23:08.183857    7402 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:23:08.183900    7402 main.go:141] libmachine: Decoding PEM data...
	I0816 05:23:08.183913    7402 main.go:141] libmachine: Parsing certificate...
	I0816 05:23:08.184564    7402 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:23:08.348481    7402 main.go:141] libmachine: Creating SSH key...
	I0816 05:23:08.416661    7402 main.go:141] libmachine: Creating Disk image...
	I0816 05:23:08.416670    7402 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:23:08.416893    7402 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2
	I0816 05:23:08.426335    7402 main.go:141] libmachine: STDOUT: 
	I0816 05:23:08.426354    7402 main.go:141] libmachine: STDERR: 
	I0816 05:23:08.426407    7402 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2 +20000M
	I0816 05:23:08.434243    7402 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:23:08.434265    7402 main.go:141] libmachine: STDERR: 
	I0816 05:23:08.434280    7402 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2
	I0816 05:23:08.434283    7402 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:23:08.434288    7402 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:23:08.434319    7402 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:98:a8:1b:0b:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2
	I0816 05:23:08.436035    7402 main.go:141] libmachine: STDOUT: 
	I0816 05:23:08.436051    7402 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:23:08.436066    7402 client.go:171] duration metric: took 252.499209ms to LocalClient.Create
	I0816 05:23:10.438288    7402 start.go:128] duration metric: took 2.315031209s to createHost
	I0816 05:23:10.438335    7402 start.go:83] releasing machines lock for "ha-912000", held for 2.315617333s
	W0816 05:23:10.438651    7402 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:23:10.448299    7402 out.go:201] 
	W0816 05:23:10.456487    7402 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:23:10.456542    7402 out.go:270] * 
	* 
	W0816 05:23:10.459121    7402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:23:10.469271    7402 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-912000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (66.699334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.85s)
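Note: both provisioning attempts above fail at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"). That symptom is consistent with the socket_vmnet daemon not running on the CI host; per the socket_vmnet project's documentation it is normally started as root before minikube runs, e.g. "sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet" (the gateway address is an assumption, not taken from this log). A minimal Go sketch of the probe that is failing, using only the standard library:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client uses. With no daemon
		// listening there, Dial returns "connection refused", matching the
		// STDERR captured above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet daemon is listening")
	}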

                                                
                                    
TestMultiControlPlane/serial/DeployApp (71.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.640917ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-912000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- rollout status deployment/busybox: exit status 1 (57.139667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.436042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.44125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.984375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.153333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.872084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.453458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.282916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.793667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.577625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.389ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.834583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.186208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.455709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.219125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (29.590791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (71.01s)
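Note: every kubectl call in this subtest fails with 'no server found for cluster "ha-912000"' before any busybox deployment is attempted; this is a downstream effect of StartCluster never bringing up an apiserver, not an independent deployment or DNS problem. One way to confirm (an extra check, not run in this log) is to inspect the kubeconfig entry with kubectl config view -o jsonpath='{.clusters[?(@.name=="ha-912000")].cluster.server}', which would print no server address for this profile.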

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-912000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.257042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-912000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (29.933417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-912000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-912000 -v=7 --alsologtostderr: exit status 83 (42.4455ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-912000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-912000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:24:21.682373    7476 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:21.682960    7476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:21.682973    7476 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:21.682975    7476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:21.683155    7476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:21.683378    7476 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:21.683582    7476 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:21.687427    7476 out.go:177] * The control-plane node ha-912000 host is not running: state=Stopped
	I0816 05:24:21.692208    7476 out.go:177]   To start a cluster, run: "minikube start -p ha-912000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-912000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (29.750459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-912000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-912000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.509041ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-912000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-912000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-912000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (29.985417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
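Note: the "unexpected end of JSON input" at ha_test.go:264 is the standard-library error for decoding zero bytes: because the kubectl context does not exist, the command printed nothing, and the test then unmarshals an empty string. A minimal sketch reproducing the error:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels map[string]interface{}
		// kubectl wrote nothing to stdout, so the decoder sees empty input.
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}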

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-912000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-912000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-912000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-912000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-912000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-912000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-912000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-912000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (29.745125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
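Note: the profile JSON above lists a single entry under Config.Nodes, so neither the 4-node expectation nor the "HAppy" status can be met; the extra control-plane and worker nodes were never provisioned. A quick way to count them (jq is an assumption, not part of the test harness) would be: out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | select(.Name=="ha-912000") | .Config.Nodes | length', which would print 1 here.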

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status --output json -v=7 --alsologtostderr: exit status 7 (30.4025ms)

                                                
                                                
-- stdout --
	{"Name":"ha-912000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:24:21.887437    7488 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:21.887586    7488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:21.887590    7488 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:21.887592    7488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:21.887724    7488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:21.887852    7488 out.go:352] Setting JSON to true
	I0816 05:24:21.887866    7488 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:21.887912    7488 notify.go:220] Checking for updates...
	I0816 05:24:21.888082    7488 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:21.888088    7488 status.go:255] checking status of ha-912000 ...
	I0816 05:24:21.888286    7488 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:24:21.888289    7488 status.go:343] host is not running, skipping remaining checks
	I0816 05:24:21.888292    7488 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-912000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (29.218625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
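Note: ha_test.go:333 fails because, with only one node, "minikube status --output json" emits a single JSON object while the test decodes into a slice ([]cmd.Status). A minimal sketch of that type mismatch, using a local stand-in for cmd.Status:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a hypothetical stand-in for the cmd.Status type named in the error.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		var statuses []Status
		// A single-node cluster prints one object, not an array of objects.
		err := json.Unmarshal([]byte(`{"Name":"ha-912000","Host":"Stopped"}`), &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}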

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 node stop m02 -v=7 --alsologtostderr: exit status 85 (45.176916ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:24:21.947098    7492 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:21.947700    7492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:21.947704    7492 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:21.947707    7492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:21.947854    7492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:21.948092    7492 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:21.948288    7492 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:21.952786    7492 out.go:201] 
	W0816 05:24:21.955739    7492 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0816 05:24:21.955743    7492 out.go:270] * 
	* 
	W0816 05:24:21.957877    7492 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:24:21.961745    7492 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-912000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (30.198709ms)

                                                
                                                
-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:24:21.992842    7494 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:21.992989    7494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:21.992993    7494 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:21.993002    7494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:21.993147    7494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:21.993265    7494 out.go:352] Setting JSON to false
	I0816 05:24:21.993277    7494 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:21.993325    7494 notify.go:220] Checking for updates...
	I0816 05:24:21.993489    7494 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:21.993495    7494 status.go:255] checking status of ha-912000 ...
	I0816 05:24:21.993717    7494 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:24:21.993721    7494 status.go:343] host is not running, skipping remaining checks
	I0816 05:24:21.993724    7494 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr": ha-912000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr": ha-912000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr": ha-912000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr": ha-912000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (30.124583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
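Note: exit status 85 (GUEST_NODE_RETRIEVE) follows directly from the earlier failures: node m02 was never created, so there is nothing to stop. Listing the profile's nodes with "out/minikube-darwin-arm64 node list -p ha-912000" (an extra check, not run in this log) would show only the primary ha-912000 node.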

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-912000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-912000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-912000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-912000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (30.059041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
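The assertion above decodes "profile list --output json" and reads the profile's Status field (expected "Degraded", got "Stopped"). A short Go sketch of that decoding, with struct fields taken from the JSON payload in the failure message; everything else is an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields this check needs; the full payload is shown above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // here: "ha-912000: Stopped"
	}
}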

TestMultiControlPlane/serial/RestartSecondaryNode (38.77s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 node start m02 -v=7 --alsologtostderr: exit status 85 (46.522917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0816 05:24:22.130378    7503 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:22.130943    7503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:22.130947    7503 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:22.130950    7503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:22.131118    7503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:22.131327    7503 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:22.131532    7503 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:22.135755    7503 out.go:201] 
	W0816 05:24:22.139782    7503 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0816 05:24:22.139786    7503 out.go:270] * 
	* 
	W0816 05:24:22.141745    7503 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:24:22.143376    7503 out.go:201] 

** /stderr **
ha_test.go:422: I0816 05:24:22.130378    7503 out.go:345] Setting OutFile to fd 1 ...
I0816 05:24:22.130943    7503 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:24:22.130947    7503 out.go:358] Setting ErrFile to fd 2...
I0816 05:24:22.130950    7503 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:24:22.131118    7503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:24:22.131327    7503 mustload.go:65] Loading cluster: ha-912000
I0816 05:24:22.131532    7503 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:24:22.135755    7503 out.go:201] 
W0816 05:24:22.139782    7503 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0816 05:24:22.139786    7503 out.go:270] * 
* 
W0816 05:24:22.141745    7503 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0816 05:24:22.143376    7503 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-912000 node start m02 -v=7 --alsologtostderr": exit status 85
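The "Could not find node m02" error (GUEST_NODE_RETRIEVE) indicates the saved profile only tracks the primary node, which matches the single entry under "Nodes" in the profile JSON earlier in this report. A hedged sketch of confirming that with the "node list" subcommand used later in this run (output format is not guaranteed; this is not part of the test):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `minikube node list -p <profile>` prints the nodes recorded in the
	// profile, typically one name/IP pair per line; a single line here
	// would explain why "m02" cannot be found.
	out, err := exec.Command("out/minikube-darwin-arm64",
		"node", "list", "-p", "ha-912000").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("node list failed:", err)
	}
}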
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (30.803125ms)

-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:24:22.177883    7505 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:22.178041    7505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:22.178044    7505 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:22.178046    7505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:22.178180    7505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:22.178289    7505 out.go:352] Setting JSON to false
	I0816 05:24:22.178300    7505 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:22.178353    7505 notify.go:220] Checking for updates...
	I0816 05:24:22.178484    7505 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:22.178495    7505 status.go:255] checking status of ha-912000 ...
	I0816 05:24:22.178706    7505 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:24:22.178710    7505 status.go:343] host is not running, skipping remaining checks
	I0816 05:24:22.178712    7505 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (75.720208ms)

-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:24:23.261451    7509 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:23.261639    7509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:23.261643    7509 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:23.261646    7509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:23.261815    7509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:23.261972    7509 out.go:352] Setting JSON to false
	I0816 05:24:23.261985    7509 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:23.262028    7509 notify.go:220] Checking for updates...
	I0816 05:24:23.262250    7509 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:23.262258    7509 status.go:255] checking status of ha-912000 ...
	I0816 05:24:23.262556    7509 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:24:23.262560    7509 status.go:343] host is not running, skipping remaining checks
	I0816 05:24:23.262563    7509 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (74.0775ms)

-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:24:24.806641    7511 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:24.806838    7511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:24.806842    7511 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:24.806845    7511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:24.807012    7511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:24.807163    7511 out.go:352] Setting JSON to false
	I0816 05:24:24.807178    7511 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:24.807213    7511 notify.go:220] Checking for updates...
	I0816 05:24:24.807425    7511 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:24.807435    7511 status.go:255] checking status of ha-912000 ...
	I0816 05:24:24.807716    7511 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:24:24.807721    7511 status.go:343] host is not running, skipping remaining checks
	I0816 05:24:24.807724    7511 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (73.416208ms)

-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:24:26.059035    7513 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:26.059233    7513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:26.059237    7513 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:26.059240    7513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:26.059433    7513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:26.059589    7513 out.go:352] Setting JSON to false
	I0816 05:24:26.059603    7513 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:26.059642    7513 notify.go:220] Checking for updates...
	I0816 05:24:26.059875    7513 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:26.059886    7513 status.go:255] checking status of ha-912000 ...
	I0816 05:24:26.060179    7513 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:24:26.060184    7513 status.go:343] host is not running, skipping remaining checks
	I0816 05:24:26.060188    7513 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (75.157792ms)

-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:24:30.096173    7515 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:30.096371    7515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:30.096375    7515 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:30.096378    7515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:30.096555    7515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:30.096714    7515 out.go:352] Setting JSON to false
	I0816 05:24:30.096730    7515 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:30.096765    7515 notify.go:220] Checking for updates...
	I0816 05:24:30.096998    7515 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:30.097007    7515 status.go:255] checking status of ha-912000 ...
	I0816 05:24:30.097322    7515 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:24:30.097327    7515 status.go:343] host is not running, skipping remaining checks
	I0816 05:24:30.097330    7515 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (72.332458ms)

-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:24:34.287221    7519 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:34.287398    7519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:34.287402    7519 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:34.287405    7519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:34.287552    7519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:34.287695    7519 out.go:352] Setting JSON to false
	I0816 05:24:34.287709    7519 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:34.287751    7519 notify.go:220] Checking for updates...
	I0816 05:24:34.287976    7519 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:34.287984    7519 status.go:255] checking status of ha-912000 ...
	I0816 05:24:34.288266    7519 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:24:34.288272    7519 status.go:343] host is not running, skipping remaining checks
	I0816 05:24:34.288275    7519 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (76.878291ms)

-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:24:42.469469    7521 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:42.469672    7521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:42.469676    7521 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:42.469679    7521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:42.469854    7521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:42.470012    7521 out.go:352] Setting JSON to false
	I0816 05:24:42.470026    7521 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:42.470065    7521 notify.go:220] Checking for updates...
	I0816 05:24:42.470268    7521 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:42.470276    7521 status.go:255] checking status of ha-912000 ...
	I0816 05:24:42.470565    7521 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:24:42.470570    7521 status.go:343] host is not running, skipping remaining checks
	I0816 05:24:42.470573    7521 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (73.872833ms)

-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:24:52.043818    7525 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:24:52.044061    7525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:52.044066    7525 out.go:358] Setting ErrFile to fd 2...
	I0816 05:24:52.044075    7525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:24:52.044243    7525 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:24:52.044413    7525 out.go:352] Setting JSON to false
	I0816 05:24:52.044429    7525 mustload.go:65] Loading cluster: ha-912000
	I0816 05:24:52.044459    7525 notify.go:220] Checking for updates...
	I0816 05:24:52.044700    7525 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:24:52.044709    7525 status.go:255] checking status of ha-912000 ...
	I0816 05:24:52.044997    7525 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:24:52.045003    7525 status.go:343] host is not running, skipping remaining checks
	I0816 05:24:52.045005    7525 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (73.957583ms)

-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:25:00.824653    7527 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:25:00.824875    7527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:00.824879    7527 out.go:358] Setting ErrFile to fd 2...
	I0816 05:25:00.824882    7527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:00.825038    7527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:25:00.825193    7527 out.go:352] Setting JSON to false
	I0816 05:25:00.825207    7527 mustload.go:65] Loading cluster: ha-912000
	I0816 05:25:00.825251    7527 notify.go:220] Checking for updates...
	I0816 05:25:00.825493    7527 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:25:00.825504    7527 status.go:255] checking status of ha-912000 ...
	I0816 05:25:00.825774    7527 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:25:00.825779    7527 status.go:343] host is not running, skipping remaining checks
	I0816 05:25:00.825782    7527 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (33.333375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (38.77s)
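Note the timestamps on the repeated status runs above: roughly 05:24:22, :23, :24, :26, :30, :34, :42, :52 and 05:25:00, i.e. growing waits between attempts. A minimal sketch of such a backoff poll; the intervals and stop condition are assumptions, and the real schedule lives in the test helpers:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Illustrative growing waits, mirroring the spacing of the log above.
	delays := []time.Duration{
		time.Second, 2 * time.Second, 4 * time.Second,
		8 * time.Second, 10 * time.Second,
	}
	for i, d := range delays {
		out, _ := exec.Command("out/minikube-darwin-arm64",
			"-p", "ha-912000", "status").CombinedOutput()
		if strings.Contains(string(out), "host: Running") {
			fmt.Println("host came back on attempt", i+1)
			return
		}
		time.Sleep(d)
	}
	fmt.Println("host still stopped after all attempts")
}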

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-912000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-912000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-912000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-912000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-912000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-912000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-912000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-912000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (29.887166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)
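This check reads the same profile JSON as before but counts the entries under Config.Nodes, expecting four and finding one. A sketch of that count, again declaring only the fields visible in the payload above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			// Node details are irrelevant for the count, so keep them raw.
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var ps profiles
	if err := json.Unmarshal(out, &ps); err != nil {
		panic(err)
	}
	for _, p := range ps.Valid {
		fmt.Printf("%s: %d node(s), want 4\n", p.Name, len(p.Config.Nodes))
	}
}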

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.68s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-912000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-912000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-912000 -v=7 --alsologtostderr: (3.319676791s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-912000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-912000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229035416s)

-- stdout --
	* [ha-912000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-912000" primary control-plane node in "ha-912000" cluster
	* Restarting existing qemu2 VM for "ha-912000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-912000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:25:04.352686    7556 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:25:04.352849    7556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:04.352854    7556 out.go:358] Setting ErrFile to fd 2...
	I0816 05:25:04.352857    7556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:04.353034    7556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:25:04.354218    7556 out.go:352] Setting JSON to false
	I0816 05:25:04.373516    7556 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5073,"bootTime":1723806031,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:25:04.373589    7556 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:25:04.377165    7556 out.go:177] * [ha-912000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:25:04.385132    7556 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:25:04.385168    7556 notify.go:220] Checking for updates...
	I0816 05:25:04.393115    7556 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:25:04.397138    7556 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:25:04.400150    7556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:25:04.403183    7556 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:25:04.406108    7556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:25:04.409465    7556 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:25:04.409515    7556 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:25:04.414120    7556 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:25:04.421087    7556 start.go:297] selected driver: qemu2
	I0816 05:25:04.421096    7556 start.go:901] validating driver "qemu2" against &{Name:ha-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:25:04.421161    7556 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:25:04.423575    7556 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:25:04.423626    7556 cni.go:84] Creating CNI manager for ""
	I0816 05:25:04.423631    7556 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 05:25:04.423680    7556 start.go:340] cluster config:
	{Name:ha-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-912000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:25:04.427497    7556 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:25:04.436064    7556 out.go:177] * Starting "ha-912000" primary control-plane node in "ha-912000" cluster
	I0816 05:25:04.440090    7556 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:25:04.440108    7556 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:25:04.440118    7556 cache.go:56] Caching tarball of preloaded images
	I0816 05:25:04.440192    7556 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:25:04.440198    7556 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:25:04.440268    7556 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/ha-912000/config.json ...
	I0816 05:25:04.440756    7556 start.go:360] acquireMachinesLock for ha-912000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:25:04.440796    7556 start.go:364] duration metric: took 32.083µs to acquireMachinesLock for "ha-912000"
	I0816 05:25:04.440806    7556 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:25:04.440812    7556 fix.go:54] fixHost starting: 
	I0816 05:25:04.440943    7556 fix.go:112] recreateIfNeeded on ha-912000: state=Stopped err=<nil>
	W0816 05:25:04.440955    7556 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:25:04.449090    7556 out.go:177] * Restarting existing qemu2 VM for "ha-912000" ...
	I0816 05:25:04.453038    7556 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:25:04.453076    7556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:98:a8:1b:0b:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2
	I0816 05:25:04.455180    7556 main.go:141] libmachine: STDOUT: 
	I0816 05:25:04.455204    7556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:25:04.455233    7556 fix.go:56] duration metric: took 14.437583ms for fixHost
	I0816 05:25:04.455237    7556 start.go:83] releasing machines lock for "ha-912000", held for 14.452333ms
	W0816 05:25:04.455245    7556 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:25:04.455290    7556 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:25:04.455295    7556 start.go:729] Will try again in 5 seconds ...
	I0816 05:25:09.451895    7556 start.go:360] acquireMachinesLock for ha-912000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:25:09.452287    7556 start.go:364] duration metric: took 317µs to acquireMachinesLock for "ha-912000"
	I0816 05:25:09.452422    7556 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:25:09.452442    7556 fix.go:54] fixHost starting: 
	I0816 05:25:09.453166    7556 fix.go:112] recreateIfNeeded on ha-912000: state=Stopped err=<nil>
	W0816 05:25:09.453191    7556 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:25:09.457641    7556 out.go:177] * Restarting existing qemu2 VM for "ha-912000" ...
	I0816 05:25:09.463565    7556 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:25:09.463793    7556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:98:a8:1b:0b:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2
	I0816 05:25:09.472818    7556 main.go:141] libmachine: STDOUT: 
	I0816 05:25:09.472893    7556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:25:09.472975    7556 fix.go:56] duration metric: took 20.544875ms for fixHost
	I0816 05:25:09.472992    7556 start.go:83] releasing machines lock for "ha-912000", held for 20.699041ms
	W0816 05:25:09.473259    7556 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-912000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-912000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:25:09.481642    7556 out.go:201] 
	W0816 05:25:09.485634    7556 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:25:09.485678    7556 out.go:270] * 
	* 
	W0816 05:25:09.488330    7556 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:25:09.496511    7556 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-912000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-912000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (33.752542ms)
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.68s)
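Both restart attempts above die at the same point: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's unix socket at /var/run/socket_vmnet ("Connection refused"). A hedged sketch of probing that socket directly to check whether the daemon is up before retrying:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the unix socket path taken from the log; "connection refused"
	// here reproduces the failure mode seen in the start output above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}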

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.543833ms)

-- stdout --
	* The control-plane node ha-912000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-912000"
-- /stdout --
** stderr ** 
	I0816 05:25:09.640963    7568 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:25:09.641558    7568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:09.641562    7568 out.go:358] Setting ErrFile to fd 2...
	I0816 05:25:09.641564    7568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:09.641712    7568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:25:09.641927    7568 mustload.go:65] Loading cluster: ha-912000
	I0816 05:25:09.642130    7568 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:25:09.646958    7568 out.go:177] * The control-plane node ha-912000 host is not running: state=Stopped
	I0816 05:25:09.648370    7568 out.go:177]   To start a cluster, run: "minikube start -p ha-912000"
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-912000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (29.435042ms)
-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0816 05:25:09.680449    7570 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:25:09.680591    7570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:09.680594    7570 out.go:358] Setting ErrFile to fd 2...
	I0816 05:25:09.680599    7570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:09.680718    7570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:25:09.680826    7570 out.go:352] Setting JSON to false
	I0816 05:25:09.680837    7570 mustload.go:65] Loading cluster: ha-912000
	I0816 05:25:09.680881    7570 notify.go:220] Checking for updates...
	I0816 05:25:09.681028    7570 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:25:09.681038    7570 status.go:255] checking status of ha-912000 ...
	I0816 05:25:09.681237    7570 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:25:09.681241    7570 status.go:343] host is not running, skipping remaining checks
	I0816 05:25:09.681243    7570 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (30.11425ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-912000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-912000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-912000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-912000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (30.74525ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
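The assertion above reads the Status field out of `minikube profile list --output json` and expects "Degraded". A minimal Go sketch of that check, declaring only the two fields it uses; the JSON shape is taken from the blob above, and the program is illustrative rather than the test's actual code.

	// Decode the profile list and print each profile's status. With the
	// host down, this yields "ha-912000: Stopped" instead of "Degraded".
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}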

TestMultiControlPlane/serial/StopCluster (3.04s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-912000 stop -v=7 --alsologtostderr: (2.937693792s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr: exit status 7 (65.865542ms)
-- stdout --
	ha-912000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0816 05:25:12.787442    7597 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:25:12.787947    7597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:12.787953    7597 out.go:358] Setting ErrFile to fd 2...
	I0816 05:25:12.787957    7597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:12.788226    7597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:25:12.788480    7597 out.go:352] Setting JSON to false
	I0816 05:25:12.788525    7597 mustload.go:65] Loading cluster: ha-912000
	I0816 05:25:12.788637    7597 notify.go:220] Checking for updates...
	I0816 05:25:12.789085    7597 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:25:12.789101    7597 status.go:255] checking status of ha-912000 ...
	I0816 05:25:12.789353    7597 status.go:330] ha-912000 host status = "Stopped" (err=<nil>)
	I0816 05:25:12.789358    7597 status.go:343] host is not running, skipping remaining checks
	I0816 05:25:12.789361    7597 status.go:257] ha-912000 status: &{Name:ha-912000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr": ha-912000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr": ha-912000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-912000 status -v=7 --alsologtostderr": ha-912000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (32.350875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.04s)

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-912000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-912000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.186417375s)
-- stdout --
	* [ha-912000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-912000" primary control-plane node in "ha-912000" cluster
	* Restarting existing qemu2 VM for "ha-912000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-912000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0816 05:25:12.850656    7601 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:25:12.850788    7601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:12.850792    7601 out.go:358] Setting ErrFile to fd 2...
	I0816 05:25:12.850795    7601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:12.850920    7601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:25:12.851899    7601 out.go:352] Setting JSON to false
	I0816 05:25:12.867886    7601 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5081,"bootTime":1723806031,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:25:12.867951    7601 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:25:12.873276    7601 out.go:177] * [ha-912000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:25:12.881235    7601 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:25:12.881269    7601 notify.go:220] Checking for updates...
	I0816 05:25:12.888245    7601 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:25:12.889518    7601 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:25:12.892194    7601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:25:12.895212    7601 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:25:12.902179    7601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:25:12.905433    7601 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:25:12.905697    7601 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:25:12.909232    7601 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:25:12.916190    7601 start.go:297] selected driver: qemu2
	I0816 05:25:12.916202    7601 start.go:901] validating driver "qemu2" against &{Name:ha-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:25:12.916271    7601 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:25:12.918685    7601 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:25:12.918711    7601 cni.go:84] Creating CNI manager for ""
	I0816 05:25:12.918716    7601 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 05:25:12.918761    7601 start.go:340] cluster config:
	{Name:ha-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-912000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:25:12.922335    7601 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:25:12.930138    7601 out.go:177] * Starting "ha-912000" primary control-plane node in "ha-912000" cluster
	I0816 05:25:12.934137    7601 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:25:12.934152    7601 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:25:12.934166    7601 cache.go:56] Caching tarball of preloaded images
	I0816 05:25:12.934218    7601 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:25:12.934223    7601 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:25:12.934282    7601 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/ha-912000/config.json ...
	I0816 05:25:12.934633    7601 start.go:360] acquireMachinesLock for ha-912000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:25:12.934663    7601 start.go:364] duration metric: took 22.209µs to acquireMachinesLock for "ha-912000"
	I0816 05:25:12.934673    7601 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:25:12.934678    7601 fix.go:54] fixHost starting: 
	I0816 05:25:12.934799    7601 fix.go:112] recreateIfNeeded on ha-912000: state=Stopped err=<nil>
	W0816 05:25:12.934807    7601 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:25:12.943172    7601 out.go:177] * Restarting existing qemu2 VM for "ha-912000" ...
	I0816 05:25:12.947073    7601 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:25:12.947109    7601 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:98:a8:1b:0b:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2
	I0816 05:25:12.949143    7601 main.go:141] libmachine: STDOUT: 
	I0816 05:25:12.949163    7601 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:25:12.949192    7601 fix.go:56] duration metric: took 14.523542ms for fixHost
	I0816 05:25:12.949196    7601 start.go:83] releasing machines lock for "ha-912000", held for 14.53775ms
	W0816 05:25:12.949203    7601 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:25:12.949237    7601 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:25:12.949242    7601 start.go:729] Will try again in 5 seconds ...
	I0816 05:25:17.948763    7601 start.go:360] acquireMachinesLock for ha-912000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:25:17.949243    7601 start.go:364] duration metric: took 407.709µs to acquireMachinesLock for "ha-912000"
	I0816 05:25:17.949371    7601 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:25:17.949389    7601 fix.go:54] fixHost starting: 
	I0816 05:25:17.950041    7601 fix.go:112] recreateIfNeeded on ha-912000: state=Stopped err=<nil>
	W0816 05:25:17.950071    7601 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:25:17.954429    7601 out.go:177] * Restarting existing qemu2 VM for "ha-912000" ...
	I0816 05:25:17.958439    7601 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:25:17.958682    7601 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:98:a8:1b:0b:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/ha-912000/disk.qcow2
	I0816 05:25:17.967688    7601 main.go:141] libmachine: STDOUT: 
	I0816 05:25:17.967752    7601 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:25:17.967826    7601 fix.go:56] duration metric: took 18.44175ms for fixHost
	I0816 05:25:17.967838    7601 start.go:83] releasing machines lock for "ha-912000", held for 18.577833ms
	W0816 05:25:17.968006    7601 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-912000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:25:17.977371    7601 out.go:201] 
	W0816 05:25:17.981394    7601 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:25:17.981443    7601 out.go:270] * 
	W0816 05:25:17.984017    7601 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:25:17.993327    7601 out.go:201] 
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-912000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (68.221ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
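The stderr trace shows the restart path's retry shape: fixHost fails, minikube logs "StartHost failed, but will try again", sleeps five seconds, retries once, and only then exits 80. A compressed sketch of that control flow, with startHost stubbed to fail the way the log shows; the function names are illustrative, not minikube's internals.

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the real driver start, failing the same
	// way every attempt in this run fails.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80)
			}
		}
	}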

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-912000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-912000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-912000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-912000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (29.568625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-912000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-912000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.900625ms)
-- stdout --
	* The control-plane node ha-912000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-912000"
-- /stdout --
** stderr ** 
	I0816 05:25:18.184005    7619 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:25:18.184166    7619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:18.184169    7619 out.go:358] Setting ErrFile to fd 2...
	I0816 05:25:18.184171    7619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:25:18.184311    7619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:25:18.184556    7619 mustload.go:65] Loading cluster: ha-912000
	I0816 05:25:18.184734    7619 config.go:182] Loaded profile config "ha-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:25:18.189216    7619 out.go:177] * The control-plane node ha-912000 host is not running: state=Stopped
	I0816 05:25:18.193244    7619 out.go:177]   To start a cluster, run: "minikube start -p ha-912000"
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-912000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (29.993583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-912000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-912000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-912000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-912000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-912000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-912000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-912000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-912000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-912000 -n ha-912000: exit status 7 (29.710417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-912000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (10.08s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-004000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-004000 --driver=qemu2 : exit status 80 (10.013388917s)
-- stdout --
	* [image-004000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-004000" primary control-plane node in "image-004000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-004000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-004000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-004000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-004000 -n image-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-004000 -n image-004000: exit status 7 (67.699333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.08s)

TestJSONOutput/start/Command (9.81s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-435000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-435000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.809341875s)
-- stdout --
	{"specversion":"1.0","id":"933f80b6-767d-458c-97e2-ade7bb085105","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-435000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d6c376b-0730-4bb6-a367-88c77642dbb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"fade210f-a67e-4078-99d9-665fcc549e5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig"}}
	{"specversion":"1.0","id":"cafa2ddc-ff94-4783-a3f4-fc1b81c6f246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e94bd855-cb4a-41ab-a9ef-f528f198b291","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a569e2b0-9314-46f5-9aaa-17a883b42d32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube"}}
	{"specversion":"1.0","id":"7151e0e8-5a7b-4007-ba74-99d2d73436f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6b067a93-691c-48a6-b02f-c568fc6b026e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4447755d-6f7c-41a5-b8f1-bfa75c1d1c87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c8ecd810-15e2-4609-af7a-97fe9617f65e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-435000\" primary control-plane node in \"json-output-435000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5bc45986-575c-4e6c-a59f-5d5c6afe9433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"de9fc83d-727e-463d-af04-b6912505828d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-435000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"6bf39e6b-41be-49ee-8c1f-83bc894ee96c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"3a2cbf39-1208-4c2f-b240-8c7bdcb36a91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"6906f1f3-0c48-4110-a370-b7bf5fb34373","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-435000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"011d6420-2a2e-4986-9adf-a8a879020849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"485430aa-2c15-4eee-b2c1-c3bc7e8e4b16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-435000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.81s)
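The `invalid character 'O'` failure above is mechanical rather than a JSON bug in minikube: the test decodes the captured stdout line by line as CloudEvents, and the `OUTPUT:` / `ERROR:` text written by socket_vmnet_client is interleaved with the event stream, so the first non-JSON line aborts the decode. (The same per-line decode rejects the `*`-prefixed plain-text fallback in the unpause failure further down.) A minimal Go sketch of that decode step, assuming an illustrative event struct rather than the test's real helpers:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent is illustrative; the real test uses its own event type.
type cloudEvent struct {
	SpecVersion string          `json:"specversion"`
	Type        string          `json:"type"`
	Data        json.RawMessage `json:"data"`
}

func main() {
	// A valid event line followed by the socket_vmnet_client text that
	// leaked into the same stream, as in the captured stdout above.
	out := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.info\",\"data\":{}}\nOUTPUT: "
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("event:", ev.Type)
	}
}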

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-435000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-435000 --output=json --user=testUser: exit status 83 (78.94225ms)

-- stdout --
	{"specversion":"1.0","id":"32bbfc4a-3064-4767-beba-392a9b5fde14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-435000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"152faa2c-3b80-4d20-a81b-c4e3fe5ba3fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-435000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-435000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-435000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-435000 --output=json --user=testUser: exit status 83 (45.411167ms)

-- stdout --
	* The control-plane node json-output-435000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-435000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-435000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-435000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-697000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-697000 --driver=qemu2 : exit status 80 (9.930563042s)

-- stdout --
	* [first-697000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-697000" primary control-plane node in "first-697000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-697000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-697000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-697000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-16 05:25:50.910975 -0700 PDT m=+388.392312210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-699000 -n second-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-699000 -n second-699000: exit status 85 (79.369916ms)

-- stdout --
	* Profile "second-699000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-699000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-699000" host is not running, skipping log retrieval (state="* Profile \"second-699000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-699000\"")
helpers_test.go:175: Cleaning up "second-699000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-699000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-16 05:25:51.099503 -0700 PDT m=+388.580852293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-697000 -n first-697000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-697000 -n first-697000: exit status 7 (30.314833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-697000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-697000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-697000
--- FAIL: TestMinikubeProfile (10.22s)
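Every qemu2 start in this report dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM is created, fails to get a network, is deleted, and the single retry fails identically. Dialing the socket directly reproduces the exact "Connection refused" from the logs; a small Go probe, assuming only the socket path shown above:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the unix socket that socket_vmnet_client uses. On this host the
	// dial fails with ECONNREFUSED, meaning no daemon is accepting on the
	// path (socket_vmnet is not running or lost its socket).
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}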

TestMountStart/serial/StartWithMountFirst (10.05s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-521000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-521000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.975325542s)

-- stdout --
	* [mount-start-1-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-521000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-521000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-521000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-521000 -n mount-start-1-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-521000 -n mount-start-1-521000: exit status 7 (71.521583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-521000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.05s)

TestMultiNode/serial/FreshStart2Nodes (9.82s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-569000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-569000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.750574959s)

-- stdout --
	* [multinode-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-569000" primary control-plane node in "multinode-569000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:26:01.459896    7758 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:26:01.460004    7758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:26:01.460008    7758 out.go:358] Setting ErrFile to fd 2...
	I0816 05:26:01.460011    7758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:26:01.460142    7758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:26:01.461280    7758 out.go:352] Setting JSON to false
	I0816 05:26:01.477244    7758 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5130,"bootTime":1723806031,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:26:01.477357    7758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:26:01.483701    7758 out.go:177] * [multinode-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:26:01.490648    7758 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:26:01.490771    7758 notify.go:220] Checking for updates...
	I0816 05:26:01.498522    7758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:26:01.502648    7758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:26:01.506658    7758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:26:01.509675    7758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:26:01.512658    7758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:26:01.515862    7758 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:26:01.520533    7758 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:26:01.527658    7758 start.go:297] selected driver: qemu2
	I0816 05:26:01.527665    7758 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:26:01.527671    7758 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:26:01.529970    7758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:26:01.533603    7758 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:26:01.536669    7758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:26:01.536690    7758 cni.go:84] Creating CNI manager for ""
	I0816 05:26:01.536695    7758 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 05:26:01.536700    7758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 05:26:01.536735    7758 start.go:340] cluster config:
	{Name:multinode-569000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:26:01.540345    7758 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:26:01.548640    7758 out.go:177] * Starting "multinode-569000" primary control-plane node in "multinode-569000" cluster
	I0816 05:26:01.552595    7758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:26:01.552615    7758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:26:01.552625    7758 cache.go:56] Caching tarball of preloaded images
	I0816 05:26:01.552700    7758 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:26:01.552707    7758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:26:01.552946    7758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/multinode-569000/config.json ...
	I0816 05:26:01.552958    7758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/multinode-569000/config.json: {Name:mk39875874a01d2a3300273735874d39592fb7b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:26:01.553184    7758 start.go:360] acquireMachinesLock for multinode-569000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:26:01.553220    7758 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "multinode-569000"
	I0816 05:26:01.553234    7758 start.go:93] Provisioning new machine with config: &{Name:multinode-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:26:01.553268    7758 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:26:01.561598    7758 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:26:01.579374    7758 start.go:159] libmachine.API.Create for "multinode-569000" (driver="qemu2")
	I0816 05:26:01.579398    7758 client.go:168] LocalClient.Create starting
	I0816 05:26:01.579458    7758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:26:01.579489    7758 main.go:141] libmachine: Decoding PEM data...
	I0816 05:26:01.579499    7758 main.go:141] libmachine: Parsing certificate...
	I0816 05:26:01.579538    7758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:26:01.579560    7758 main.go:141] libmachine: Decoding PEM data...
	I0816 05:26:01.579569    7758 main.go:141] libmachine: Parsing certificate...
	I0816 05:26:01.579944    7758 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:26:01.734573    7758 main.go:141] libmachine: Creating SSH key...
	I0816 05:26:01.761538    7758 main.go:141] libmachine: Creating Disk image...
	I0816 05:26:01.761543    7758 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:26:01.761776    7758 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2
	I0816 05:26:01.770913    7758 main.go:141] libmachine: STDOUT: 
	I0816 05:26:01.770934    7758 main.go:141] libmachine: STDERR: 
	I0816 05:26:01.770989    7758 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2 +20000M
	I0816 05:26:01.778830    7758 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:26:01.778846    7758 main.go:141] libmachine: STDERR: 
	I0816 05:26:01.778866    7758 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2
	I0816 05:26:01.778870    7758 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:26:01.778883    7758 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:26:01.778928    7758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:c4:bf:11:90:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2
	I0816 05:26:01.780610    7758 main.go:141] libmachine: STDOUT: 
	I0816 05:26:01.780627    7758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:26:01.780645    7758 client.go:171] duration metric: took 201.251375ms to LocalClient.Create
	I0816 05:26:03.782768    7758 start.go:128] duration metric: took 2.229571542s to createHost
	I0816 05:26:03.782825    7758 start.go:83] releasing machines lock for "multinode-569000", held for 2.229685041s
	W0816 05:26:03.782883    7758 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:26:03.798062    7758 out.go:177] * Deleting "multinode-569000" in qemu2 ...
	W0816 05:26:03.833015    7758 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:26:03.833040    7758 start.go:729] Will try again in 5 seconds ...
	I0816 05:26:08.835045    7758 start.go:360] acquireMachinesLock for multinode-569000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:26:08.835481    7758 start.go:364] duration metric: took 360.542µs to acquireMachinesLock for "multinode-569000"
	I0816 05:26:08.835623    7758 start.go:93] Provisioning new machine with config: &{Name:multinode-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:26:08.835951    7758 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:26:08.845444    7758 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:26:08.895074    7758 start.go:159] libmachine.API.Create for "multinode-569000" (driver="qemu2")
	I0816 05:26:08.895129    7758 client.go:168] LocalClient.Create starting
	I0816 05:26:08.895266    7758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:26:08.895324    7758 main.go:141] libmachine: Decoding PEM data...
	I0816 05:26:08.895342    7758 main.go:141] libmachine: Parsing certificate...
	I0816 05:26:08.895399    7758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:26:08.895441    7758 main.go:141] libmachine: Decoding PEM data...
	I0816 05:26:08.895455    7758 main.go:141] libmachine: Parsing certificate...
	I0816 05:26:08.895978    7758 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:26:09.059239    7758 main.go:141] libmachine: Creating SSH key...
	I0816 05:26:09.116843    7758 main.go:141] libmachine: Creating Disk image...
	I0816 05:26:09.116849    7758 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:26:09.117068    7758 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2
	I0816 05:26:09.126256    7758 main.go:141] libmachine: STDOUT: 
	I0816 05:26:09.126276    7758 main.go:141] libmachine: STDERR: 
	I0816 05:26:09.126327    7758 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2 +20000M
	I0816 05:26:09.134179    7758 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:26:09.134204    7758 main.go:141] libmachine: STDERR: 
	I0816 05:26:09.134218    7758 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2
	I0816 05:26:09.134223    7758 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:26:09.134235    7758 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:26:09.134260    7758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:eb:02:c2:b1:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2
	I0816 05:26:09.135945    7758 main.go:141] libmachine: STDOUT: 
	I0816 05:26:09.135987    7758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:26:09.136000    7758 client.go:171] duration metric: took 240.872916ms to LocalClient.Create
	I0816 05:26:11.138109    7758 start.go:128] duration metric: took 2.302201667s to createHost
	I0816 05:26:11.138190    7758 start.go:83] releasing machines lock for "multinode-569000", held for 2.30275675s
	W0816 05:26:11.138543    7758 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:26:11.149159    7758 out.go:201] 
	W0816 05:26:11.157471    7758 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:26:11.157521    7758 out.go:270] * 
	* 
	W0816 05:26:11.160341    7758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:26:11.169164    7758 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-569000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (68.280208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.82s)
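The stderr trace above also documents how the driver wires VM networking: qemu is launched under /opt/socket_vmnet/bin/socket_vmnet_client, which dials /var/run/socket_vmnet and hands the connected descriptor to qemu as fd 3, which is what the `-netdev socket,id=net0,fd=3` argument refers to. A simplified Go sketch of that handoff (paths and flags copied from the log, everything else reduced to the essentials):

package main

import (
	"fmt"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Step 1: connect to the vmnet daemon. This is the call that fails
	// throughout this report.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("ERROR: Failed to connect to \"/var/run/socket_vmnet\":", err)
		os.Exit(1)
	}
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		panic(err)
	}
	// Step 2: start qemu with the connection as fd 3; ExtraFiles[0] maps
	// to descriptor 3 in the child, matching "-netdev socket,id=net0,fd=3".
	cmd := exec.Command("qemu-system-aarch64",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("qemu exited:", err)
	}
}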

TestMultiNode/serial/DeployApp2Nodes (91.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.914917ms)

** stderr ** 
	error: cluster "multinode-569000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- rollout status deployment/busybox: exit status 1 (56.452958ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.876208ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.706083ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.084708ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.793209ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.137333ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.418292ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.316083ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.981042ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.145292ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.433416ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.717625ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.240833ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.919291ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.030041ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (31.19975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (91.09s)
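The ten identical `get pods -o jsonpath='{.items[*].status.podIP}'` attempts above come from a poll loop: each failure is logged as "may be temporary" and retried before the test gives up with "failed to resolve pod IPs". An illustrative Go version of that loop; the attempt count, delay, and function name are placeholders, not the test's actual values:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs polls the cluster for pod IPs, treating each failure as possibly
// temporary, the way the repeated attempts above behave.
func podIPs(profile string) ([]string, error) {
	var lastErr error
	for attempt := 0; attempt < 10; attempt++ {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			lastErr = fmt.Errorf("failed to retrieve Pod IPs (may be temporary): %w", err)
			time.Sleep(2 * time.Second)
			continue
		}
		return strings.Fields(string(out)), nil
	}
	return nil, lastErr
}

func main() {
	ips, err := podIPs("multinode-569000")
	if err != nil {
		fmt.Println("failed to resolve pod IPs:", err)
		return
	}
	fmt.Println(ips)
}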

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-569000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.513875ms)

** stderr ** 
	error: no server found for cluster "multinode-569000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (29.787292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-569000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-569000 -v 3 --alsologtostderr: exit status 83 (42.027542ms)

-- stdout --
	* The control-plane node multinode-569000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-569000"

-- /stdout --
** stderr ** 
	I0816 05:27:42.456448    7837 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:27:42.456598    7837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:42.456602    7837 out.go:358] Setting ErrFile to fd 2...
	I0816 05:27:42.456604    7837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:42.456729    7837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:27:42.456951    7837 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:27:42.457138    7837 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:27:42.461579    7837 out.go:177] * The control-plane node multinode-569000 host is not running: state=Stopped
	I0816 05:27:42.465670    7837 out.go:177]   To start a cluster, run: "minikube start -p multinode-569000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-569000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (30.599375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
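The exit codes carry the diagnosis throughout these failures: 80 is the GUEST_PROVISION error from the start attempts, 83 means the profile exists but its control-plane host is stopped, 85 means the profile does not exist at all, and `status` returns 7 for a stopped host (which the post-mortem treats as "may be ok"). A sketch of reading them from a run; the descriptions are taken from the failures in this report, not from an exhaustive list of minikube's codes:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "multinode-569000").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		switch code := ee.ExitCode(); code {
		case 7:
			fmt.Println("host stopped (post-mortem treats this as may-be-ok)")
		case 80:
			fmt.Println("GUEST_PROVISION: provisioning the VM failed")
		case 83:
			fmt.Println("host not running; start the cluster first")
		case 85:
			fmt.Println("profile not found")
		default:
			fmt.Println("exit status", code)
		}
		return
	}
	fmt.Println("host is Running")
}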

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-569000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-569000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.293ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-569000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-569000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-569000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (29.866ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
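
The jsonpath template above renders each node's `.metadata.labels` object followed by a comma, wrapped in brackets, so successful output looks roughly like `[{...},{...},]`. When kubectl exits 1 with empty output, as here, decoding that empty string is what produces "unexpected end of JSON input". A sketch of the decode under that assumed output shape (trailing comma stripped first; not the suite's exact code):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Assumed shape of a successful run; a run against a missing context
	// yields "" instead, and decoding "" gives the "unexpected end of
	// JSON input" seen in the log.
	raw := `[{"kubernetes.io/hostname":"multinode-569000"},{"kubernetes.io/hostname":"multinode-569000-m02"},]`
	// Drop the trailing comma the {range} template leaves behind.
	cleaned := strings.TrimSuffix(raw, ",]") + "]"
	var labels []map[string]string
	if err := json.Unmarshal([]byte(cleaned), &labels); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println("decoded labels for", len(labels), "nodes")
}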

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-569000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-569000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-569000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-569000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (30.193583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
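
The assertion parses the `profile list --output json` dump and counts `Config.Nodes`; the stopped profile still carries only its original control-plane entry, hence 1 node instead of the expected 3. A trimmed decoding sketch that keeps only the fields the assertion needs (field names taken from the JSON above; the struct is illustrative, not minikube's config type):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList keeps only what the assertion inspects.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
				Worker       bool   `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the dump above; unknown fields are simply ignored.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-569000","Status":"Stopped","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test wants 3 entries here; the stopped profile reports 1.
		fmt.Printf("%s (%s): %d node(s)\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}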

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status --output json --alsologtostderr: exit status 7 (30.639708ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-569000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:27:42.664033    7849 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:27:42.664187    7849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:42.664190    7849 out.go:358] Setting ErrFile to fd 2...
	I0816 05:27:42.664192    7849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:42.664332    7849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:27:42.664457    7849 out.go:352] Setting JSON to true
	I0816 05:27:42.664469    7849 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:27:42.664521    7849 notify.go:220] Checking for updates...
	I0816 05:27:42.664661    7849 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:27:42.664669    7849 status.go:255] checking status of multinode-569000 ...
	I0816 05:27:42.664872    7849 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:27:42.664876    7849 status.go:343] host is not running, skipping remaining checks
	I0816 05:27:42.664878    7849 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-569000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (29.196167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
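
The decode error above is a shape mismatch: for a single-node profile, `status --output json` emits one bare object, while the test unmarshals into a `[]cmd.Status` slice. A sketch of a decoder tolerant to both shapes, with a struct mirroring the fields visible in the stdout (an illustration, not the suite's type):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// nodeStatus mirrors the fields visible in the stdout above; it is not
// the cmd.Status type the test uses.
type nodeStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either a bare object (one node) or an array.
func decodeStatuses(raw []byte) ([]nodeStatus, error) {
	raw = bytes.TrimSpace(raw)
	if len(raw) > 0 && raw[0] == '{' {
		var one nodeStatus
		if err := json.Unmarshal(raw, &one); err != nil {
			return nil, err
		}
		return []nodeStatus{one}, nil
	}
	var many []nodeStatus
	if err := json.Unmarshal(raw, &many); err != nil {
		return nil, err
	}
	return many, nil
}

func main() {
	// The single-object output copied from the log above.
	raw := []byte(`{"Name":"multinode-569000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	statuses, err := decodeStatuses(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d node(s); first host state: %s\n", len(statuses), statuses[0].Host)
}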

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 node stop m03: exit status 85 (47.839917ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-569000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status: exit status 7 (31.061417ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status --alsologtostderr: exit status 7 (30.748459ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:27:42.803709    7857 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:27:42.803849    7857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:42.803854    7857 out.go:358] Setting ErrFile to fd 2...
	I0816 05:27:42.803857    7857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:42.803987    7857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:27:42.804108    7857 out.go:352] Setting JSON to false
	I0816 05:27:42.804120    7857 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:27:42.804172    7857 notify.go:220] Checking for updates...
	I0816 05:27:42.804319    7857 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:27:42.804327    7857 status.go:255] checking status of multinode-569000 ...
	I0816 05:27:42.804533    7857 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:27:42.804537    7857 status.go:343] host is not running, skipping remaining checks
	I0816 05:27:42.804539    7857 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-569000 status --alsologtostderr": multinode-569000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (30.309083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
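
The "incorrect number of running kubelets" assertion counts `kubelet: Running` lines in the plain-text status output. A minimal recreation of that count fed the stopped output above (the counting style is an assumption, not the suite's code; presumably the test expects two running kubelets after stopping one node of three):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Plain-text status output copied from the log above.
	out := `multinode-569000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	// Zero here; a healthy three-node cluster with one node stopped
	// would presumably show two.
	running := strings.Count(out, "kubelet: Running")
	fmt.Printf("running kubelets: %d\n", running)
}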

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.325958ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:27:42.864796    7861 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:27:42.865167    7861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:42.865172    7861 out.go:358] Setting ErrFile to fd 2...
	I0816 05:27:42.865179    7861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:42.865317    7861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:27:42.865550    7861 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:27:42.865752    7861 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:27:42.868524    7861 out.go:201] 
	W0816 05:27:42.872561    7861 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0816 05:27:42.872567    7861 out.go:270] * 
	* 
	W0816 05:27:42.874511    7861 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:27:42.877454    7861 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0816 05:27:42.864796    7861 out.go:345] Setting OutFile to fd 1 ...
I0816 05:27:42.865167    7861 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:27:42.865172    7861 out.go:358] Setting ErrFile to fd 2...
I0816 05:27:42.865179    7861 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:27:42.865317    7861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
I0816 05:27:42.865550    7861 mustload.go:65] Loading cluster: multinode-569000
I0816 05:27:42.865752    7861 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:27:42.868524    7861 out.go:201] 
W0816 05:27:42.872561    7861 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0816 05:27:42.872567    7861 out.go:270] * 
* 
W0816 05:27:42.874511    7861 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0816 05:27:42.877454    7861 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-569000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr: exit status 7 (30.676709ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:27:42.911417    7863 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:27:42.911556    7863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:42.911560    7863 out.go:358] Setting ErrFile to fd 2...
	I0816 05:27:42.911562    7863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:42.911696    7863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:27:42.911823    7863 out.go:352] Setting JSON to false
	I0816 05:27:42.911834    7863 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:27:42.911895    7863 notify.go:220] Checking for updates...
	I0816 05:27:42.912051    7863 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:27:42.912057    7863 status.go:255] checking status of multinode-569000 ...
	I0816 05:27:42.912261    7863 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:27:42.912265    7863 status.go:343] host is not running, skipping remaining checks
	I0816 05:27:42.912267    7863 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr: exit status 7 (76.158958ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:27:43.823309    7865 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:27:43.823493    7865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:43.823498    7865 out.go:358] Setting ErrFile to fd 2...
	I0816 05:27:43.823500    7865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:43.823696    7865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:27:43.823859    7865 out.go:352] Setting JSON to false
	I0816 05:27:43.823874    7865 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:27:43.823911    7865 notify.go:220] Checking for updates...
	I0816 05:27:43.824125    7865 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:27:43.824134    7865 status.go:255] checking status of multinode-569000 ...
	I0816 05:27:43.824419    7865 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:27:43.824424    7865 status.go:343] host is not running, skipping remaining checks
	I0816 05:27:43.824427    7865 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr: exit status 7 (75.224459ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:27:45.940345    7867 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:27:45.940537    7867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:45.940541    7867 out.go:358] Setting ErrFile to fd 2...
	I0816 05:27:45.940544    7867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:45.940714    7867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:27:45.940867    7867 out.go:352] Setting JSON to false
	I0816 05:27:45.940881    7867 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:27:45.940912    7867 notify.go:220] Checking for updates...
	I0816 05:27:45.941157    7867 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:27:45.941168    7867 status.go:255] checking status of multinode-569000 ...
	I0816 05:27:45.941435    7867 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:27:45.941440    7867 status.go:343] host is not running, skipping remaining checks
	I0816 05:27:45.941443    7867 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr: exit status 7 (76.436458ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:27:48.073427    7871 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:27:48.073604    7871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:48.073609    7871 out.go:358] Setting ErrFile to fd 2...
	I0816 05:27:48.073612    7871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:48.073788    7871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:27:48.073948    7871 out.go:352] Setting JSON to false
	I0816 05:27:48.073963    7871 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:27:48.073998    7871 notify.go:220] Checking for updates...
	I0816 05:27:48.074231    7871 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:27:48.074239    7871 status.go:255] checking status of multinode-569000 ...
	I0816 05:27:48.074518    7871 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:27:48.074523    7871 status.go:343] host is not running, skipping remaining checks
	I0816 05:27:48.074526    7871 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr: exit status 7 (73.334584ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:27:50.041247    7873 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:27:50.041444    7873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:50.041449    7873 out.go:358] Setting ErrFile to fd 2...
	I0816 05:27:50.041452    7873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:50.041619    7873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:27:50.041803    7873 out.go:352] Setting JSON to false
	I0816 05:27:50.041819    7873 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:27:50.041850    7873 notify.go:220] Checking for updates...
	I0816 05:27:50.042103    7873 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:27:50.042112    7873 status.go:255] checking status of multinode-569000 ...
	I0816 05:27:50.042422    7873 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:27:50.042427    7873 status.go:343] host is not running, skipping remaining checks
	I0816 05:27:50.042430    7873 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr: exit status 7 (72.6585ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:27:56.088047    7875 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:27:56.088234    7875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:56.088238    7875 out.go:358] Setting ErrFile to fd 2...
	I0816 05:27:56.088241    7875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:27:56.088423    7875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:27:56.088577    7875 out.go:352] Setting JSON to false
	I0816 05:27:56.088592    7875 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:27:56.088639    7875 notify.go:220] Checking for updates...
	I0816 05:27:56.088895    7875 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:27:56.088905    7875 status.go:255] checking status of multinode-569000 ...
	I0816 05:27:56.089200    7875 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:27:56.089205    7875 status.go:343] host is not running, skipping remaining checks
	I0816 05:27:56.089209    7875 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr: exit status 7 (74.599458ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:28:01.076435    7877 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:28:01.076644    7877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:01.076648    7877 out.go:358] Setting ErrFile to fd 2...
	I0816 05:28:01.076651    7877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:01.076827    7877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:28:01.076981    7877 out.go:352] Setting JSON to false
	I0816 05:28:01.076997    7877 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:28:01.077033    7877 notify.go:220] Checking for updates...
	I0816 05:28:01.077268    7877 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:28:01.077277    7877 status.go:255] checking status of multinode-569000 ...
	I0816 05:28:01.077563    7877 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:28:01.077568    7877 status.go:343] host is not running, skipping remaining checks
	I0816 05:28:01.077571    7877 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr: exit status 7 (74.646208ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:28:10.861204    7879 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:28:10.861408    7879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:10.861412    7879 out.go:358] Setting ErrFile to fd 2...
	I0816 05:28:10.861415    7879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:10.861600    7879 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:28:10.861748    7879 out.go:352] Setting JSON to false
	I0816 05:28:10.861764    7879 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:28:10.861792    7879 notify.go:220] Checking for updates...
	I0816 05:28:10.861994    7879 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:28:10.862004    7879 status.go:255] checking status of multinode-569000 ...
	I0816 05:28:10.862275    7879 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:28:10.862280    7879 status.go:343] host is not running, skipping remaining checks
	I0816 05:28:10.862283    7879 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr: exit status 7 (72.80125ms)

                                                
                                                
-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:28:20.614560    7881 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:28:20.614752    7881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:20.614757    7881 out.go:358] Setting ErrFile to fd 2...
	I0816 05:28:20.614760    7881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:20.614918    7881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:28:20.615090    7881 out.go:352] Setting JSON to false
	I0816 05:28:20.615105    7881 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:28:20.615139    7881 notify.go:220] Checking for updates...
	I0816 05:28:20.615380    7881 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:28:20.615392    7881 status.go:255] checking status of multinode-569000 ...
	I0816 05:28:20.615667    7881 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:28:20.615672    7881 status.go:343] host is not running, skipping remaining checks
	I0816 05:28:20.615675    7881 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-569000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (33.4035ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (37.82s)
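
The timestamps in the repeated status attempts (05:27:42 through 05:28:20) show the test retrying with growing pauses before giving up. A generic sketch of that retry pattern (intervals and attempt count are illustrative, not the suite's actual schedule):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Illustrative schedule; the suite's real intervals differ.
	delays := []time.Duration{time.Second, 2 * time.Second, 5 * time.Second, 10 * time.Second}
	for i, d := range delays {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-569000", "status")
		if err := cmd.Run(); err == nil {
			fmt.Println("cluster reported healthy")
			return
		}
		fmt.Printf("attempt %d failed, retrying in %s\n", i+1, d)
		time.Sleep(d)
	}
	fmt.Println("giving up: status never succeeded")
}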

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-569000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-569000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-569000: (3.399881042s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-569000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-569000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.225684208s)

                                                
                                                
-- stdout --
	* [multinode-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-569000" primary control-plane node in "multinode-569000" cluster
	* Restarting existing qemu2 VM for "multinode-569000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-569000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:28:24.142747    7905 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:28:24.142911    7905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:24.142915    7905 out.go:358] Setting ErrFile to fd 2...
	I0816 05:28:24.142918    7905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:24.143095    7905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:28:24.144409    7905 out.go:352] Setting JSON to false
	I0816 05:28:24.163794    7905 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5273,"bootTime":1723806031,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:28:24.163872    7905 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:28:24.168448    7905 out.go:177] * [multinode-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:28:24.176324    7905 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:28:24.176359    7905 notify.go:220] Checking for updates...
	I0816 05:28:24.183331    7905 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:28:24.187411    7905 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:28:24.190382    7905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:28:24.193348    7905 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:28:24.196304    7905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:28:24.199666    7905 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:28:24.199717    7905 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:28:24.203384    7905 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:28:24.210346    7905 start.go:297] selected driver: qemu2
	I0816 05:28:24.210353    7905 start.go:901] validating driver "qemu2" against &{Name:multinode-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:28:24.210408    7905 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:28:24.212933    7905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:28:24.212988    7905 cni.go:84] Creating CNI manager for ""
	I0816 05:28:24.212993    7905 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 05:28:24.213045    7905 start.go:340] cluster config:
	{Name:multinode-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:28:24.216956    7905 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:24.225384    7905 out.go:177] * Starting "multinode-569000" primary control-plane node in "multinode-569000" cluster
	I0816 05:28:24.229207    7905 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:28:24.229225    7905 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:28:24.229235    7905 cache.go:56] Caching tarball of preloaded images
	I0816 05:28:24.229303    7905 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:28:24.229309    7905 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:28:24.229393    7905 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/multinode-569000/config.json ...
	I0816 05:28:24.229849    7905 start.go:360] acquireMachinesLock for multinode-569000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:28:24.229887    7905 start.go:364] duration metric: took 30.833µs to acquireMachinesLock for "multinode-569000"
	I0816 05:28:24.229897    7905 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:28:24.229902    7905 fix.go:54] fixHost starting: 
	I0816 05:28:24.230034    7905 fix.go:112] recreateIfNeeded on multinode-569000: state=Stopped err=<nil>
	W0816 05:28:24.230048    7905 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:28:24.238335    7905 out.go:177] * Restarting existing qemu2 VM for "multinode-569000" ...
	I0816 05:28:24.242307    7905 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:28:24.242348    7905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:eb:02:c2:b1:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2
	I0816 05:28:24.244753    7905 main.go:141] libmachine: STDOUT: 
	I0816 05:28:24.244780    7905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:28:24.244810    7905 fix.go:56] duration metric: took 14.908708ms for fixHost
	I0816 05:28:24.244815    7905 start.go:83] releasing machines lock for "multinode-569000", held for 14.923708ms
	W0816 05:28:24.244823    7905 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:28:24.244862    7905 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:28:24.244868    7905 start.go:729] Will try again in 5 seconds ...
	I0816 05:28:29.246941    7905 start.go:360] acquireMachinesLock for multinode-569000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:28:29.247396    7905 start.go:364] duration metric: took 323.625µs to acquireMachinesLock for "multinode-569000"
	I0816 05:28:29.247550    7905 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:28:29.247572    7905 fix.go:54] fixHost starting: 
	I0816 05:28:29.248333    7905 fix.go:112] recreateIfNeeded on multinode-569000: state=Stopped err=<nil>
	W0816 05:28:29.248359    7905 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:28:29.256719    7905 out.go:177] * Restarting existing qemu2 VM for "multinode-569000" ...
	I0816 05:28:29.260753    7905 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:28:29.260987    7905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:eb:02:c2:b1:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2
	I0816 05:28:29.269840    7905 main.go:141] libmachine: STDOUT: 
	I0816 05:28:29.269919    7905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:28:29.269999    7905 fix.go:56] duration metric: took 22.429958ms for fixHost
	I0816 05:28:29.270020    7905 start.go:83] releasing machines lock for "multinode-569000", held for 22.573125ms
	W0816 05:28:29.270248    7905 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-569000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-569000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:28:29.277686    7905 out.go:201] 
	W0816 05:28:29.281819    7905 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:28:29.281852    7905 out.go:270] * 
	* 
	W0816 05:28:29.284506    7905 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:28:29.292797    7905 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-569000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-569000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (32.960542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.76s)
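Note: every restart attempt in this test dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives the vmnet file descriptor it expects (-netdev socket,id=net0,fd=3) and the VM is never launched. A minimal triage sketch for the CI host, assuming the from-source install under /opt/socket_vmnet implied by the client path in the log (launch flags follow the socket_vmnet README; the gateway address is only an example):

    # Is anything serving the socket the client tries to reach?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i vmnet   # launchd service label, if any, is an assumption
    # Relaunch the daemon by hand (binary path inferred from the client path above)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet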

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 node delete m03: exit status 83 (42.745291ms)

-- stdout --
	* The control-plane node multinode-569000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-569000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-569000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status --alsologtostderr: exit status 7 (30.564208ms)

-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:28:29.480439    7919 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:28:29.480586    7919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:29.480589    7919 out.go:358] Setting ErrFile to fd 2...
	I0816 05:28:29.480592    7919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:29.480728    7919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:28:29.480845    7919 out.go:352] Setting JSON to false
	I0816 05:28:29.480858    7919 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:28:29.480900    7919 notify.go:220] Checking for updates...
	I0816 05:28:29.481066    7919 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:28:29.481073    7919 status.go:255] checking status of multinode-569000 ...
	I0816 05:28:29.481282    7919 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:28:29.481285    7919 status.go:343] host is not running, skipping remaining checks
	I0816 05:28:29.481287    7919 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-569000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (30.476709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
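Note: three exit codes recur in this report and can be told apart without reading every block: 80 accompanies the GUEST_PROVISION provisioning failures, 83 accompanies the "host is not running" advice printed when a command is refused against a stopped cluster, and 7 comes from status probes of a stopped host. A quick way to survey them across a saved copy of this run (logs.txt is a placeholder path):

    grep -o "exit status [0-9]*" logs.txt | sort | uniq -c | sort -rn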

TestMultiNode/serial/StopMultiNode (2.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-569000 stop: (1.877677875s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status: exit status 7 (67.428875ms)

-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-569000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-569000 status --alsologtostderr: exit status 7 (33.263541ms)

-- stdout --
	multinode-569000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:28:31.490017    7935 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:28:31.490139    7935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:31.490142    7935 out.go:358] Setting ErrFile to fd 2...
	I0816 05:28:31.490144    7935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:31.490267    7935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:28:31.490382    7935 out.go:352] Setting JSON to false
	I0816 05:28:31.490393    7935 mustload.go:65] Loading cluster: multinode-569000
	I0816 05:28:31.490450    7935 notify.go:220] Checking for updates...
	I0816 05:28:31.490577    7935 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:28:31.490584    7935 status.go:255] checking status of multinode-569000 ...
	I0816 05:28:31.490803    7935 status.go:330] multinode-569000 host status = "Stopped" (err=<nil>)
	I0816 05:28:31.490807    7935 status.go:343] host is not running, skipping remaining checks
	I0816 05:28:31.490810    7935 status.go:257] multinode-569000 status: &{Name:multinode-569000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-569000 status --alsologtostderr": multinode-569000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-569000 status --alsologtostderr": multinode-569000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (30.073041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.01s)
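Note: the "incorrect number of stopped hosts" assertion appears to count "host: Stopped" blocks in the status output, one per expected node; because the worker nodes were never created earlier in this run, only the control-plane block is printed and the count comes up short. The count the test sees can be reproduced directly (a sketch, reusing the binary path from the run):

    out/minikube-darwin-arm64 -p multinode-569000 status | grep -c "host: Stopped"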

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-569000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-569000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.187637375s)

-- stdout --
	* [multinode-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-569000" primary control-plane node in "multinode-569000" cluster
	* Restarting existing qemu2 VM for "multinode-569000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-569000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:28:31.550623    7939 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:28:31.550745    7939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:31.550748    7939 out.go:358] Setting ErrFile to fd 2...
	I0816 05:28:31.550751    7939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:31.550908    7939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:28:31.551921    7939 out.go:352] Setting JSON to false
	I0816 05:28:31.568295    7939 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5280,"bootTime":1723806031,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:28:31.568364    7939 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:28:31.572241    7939 out.go:177] * [multinode-569000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:28:31.579317    7939 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:28:31.579369    7939 notify.go:220] Checking for updates...
	I0816 05:28:31.587166    7939 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:28:31.591199    7939 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:28:31.594099    7939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:28:31.597221    7939 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:28:31.600193    7939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:28:31.603504    7939 config.go:182] Loaded profile config "multinode-569000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:28:31.603803    7939 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:28:31.608186    7939 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:28:31.615155    7939 start.go:297] selected driver: qemu2
	I0816 05:28:31.615166    7939 start.go:901] validating driver "qemu2" against &{Name:multinode-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:28:31.615231    7939 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:28:31.617550    7939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:28:31.617593    7939 cni.go:84] Creating CNI manager for ""
	I0816 05:28:31.617597    7939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 05:28:31.617645    7939 start.go:340] cluster config:
	{Name:multinode-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:28:31.621278    7939 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:31.630022    7939 out.go:177] * Starting "multinode-569000" primary control-plane node in "multinode-569000" cluster
	I0816 05:28:31.634177    7939 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:28:31.634195    7939 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:28:31.634201    7939 cache.go:56] Caching tarball of preloaded images
	I0816 05:28:31.634256    7939 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:28:31.634262    7939 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:28:31.634338    7939 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/multinode-569000/config.json ...
	I0816 05:28:31.634763    7939 start.go:360] acquireMachinesLock for multinode-569000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:28:31.634795    7939 start.go:364] duration metric: took 25.792µs to acquireMachinesLock for "multinode-569000"
	I0816 05:28:31.634804    7939 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:28:31.634810    7939 fix.go:54] fixHost starting: 
	I0816 05:28:31.634930    7939 fix.go:112] recreateIfNeeded on multinode-569000: state=Stopped err=<nil>
	W0816 05:28:31.634939    7939 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:28:31.643143    7939 out.go:177] * Restarting existing qemu2 VM for "multinode-569000" ...
	I0816 05:28:31.647187    7939 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:28:31.647240    7939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:eb:02:c2:b1:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2
	I0816 05:28:31.649335    7939 main.go:141] libmachine: STDOUT: 
	I0816 05:28:31.649357    7939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:28:31.649389    7939 fix.go:56] duration metric: took 14.580667ms for fixHost
	I0816 05:28:31.649394    7939 start.go:83] releasing machines lock for "multinode-569000", held for 14.594833ms
	W0816 05:28:31.649402    7939 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:28:31.649440    7939 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:28:31.649444    7939 start.go:729] Will try again in 5 seconds ...
	I0816 05:28:36.651563    7939 start.go:360] acquireMachinesLock for multinode-569000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:28:36.651949    7939 start.go:364] duration metric: took 304.625µs to acquireMachinesLock for "multinode-569000"
	I0816 05:28:36.652108    7939 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:28:36.652126    7939 fix.go:54] fixHost starting: 
	I0816 05:28:36.652831    7939 fix.go:112] recreateIfNeeded on multinode-569000: state=Stopped err=<nil>
	W0816 05:28:36.652855    7939 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:28:36.657409    7939 out.go:177] * Restarting existing qemu2 VM for "multinode-569000" ...
	I0816 05:28:36.665259    7939 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:28:36.665444    7939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:eb:02:c2:b1:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/multinode-569000/disk.qcow2
	I0816 05:28:36.674653    7939 main.go:141] libmachine: STDOUT: 
	I0816 05:28:36.674725    7939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:28:36.674808    7939 fix.go:56] duration metric: took 22.67875ms for fixHost
	I0816 05:28:36.674829    7939 start.go:83] releasing machines lock for "multinode-569000", held for 22.854375ms
	W0816 05:28:36.675043    7939 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-569000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-569000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:28:36.682105    7939 out.go:201] 
	W0816 05:28:36.685370    7939 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:28:36.685421    7939 out.go:270] * 
	* 
	W0816 05:28:36.688125    7939 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:28:36.696260    7939 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-569000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (69.345416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
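Note: the post-mortem helper polls the host state with a Go template over minikube's status struct, whose full field set is visible in the stderr trace above (Name, Host, Kubelet, APIServer, Kubeconfig). Any of those fields can be selected the same way; for example (a sketch, shell quoting assumed):

    out/minikube-darwin-arm64 status -p multinode-569000 --format '{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'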

TestMultiNode/serial/ValidateNameConflict (20.29s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-569000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-569000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-569000-m01 --driver=qemu2 : exit status 80 (10.040825875s)

-- stdout --
	* [multinode-569000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-569000-m01" primary control-plane node in "multinode-569000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-569000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-569000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-569000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-569000-m02 --driver=qemu2 : exit status 80 (10.026089791s)

-- stdout --
	* [multinode-569000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-569000-m02" primary control-plane node in "multinode-569000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-569000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-569000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-569000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-569000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-569000: exit status 83 (80.643542ms)

-- stdout --
	* The control-plane node multinode-569000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-569000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-569000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-569000 -n multinode-569000: exit status 7 (30.941084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-569000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.29s)
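Note: the teardown above deletes only the -m02 profile; multinode-569000-m01 is left behind and is still visible when TestPreload starts below ("Loaded profile config "multinode-569000-m01""). Leftover profiles can be listed and removed with the same binary:

    out/minikube-darwin-arm64 profile list
    out/minikube-darwin-arm64 delete -p multinode-569000-m01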

TestPreload (10.28s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.126968083s)

-- stdout --
	* [test-preload-105000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-105000" primary control-plane node in "test-preload-105000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-105000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:28:57.210289    7991 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:28:57.210423    7991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:57.210426    7991 out.go:358] Setting ErrFile to fd 2...
	I0816 05:28:57.210429    7991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:28:57.210562    7991 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:28:57.211790    7991 out.go:352] Setting JSON to false
	I0816 05:28:57.227882    7991 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5306,"bootTime":1723806031,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:28:57.227960    7991 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:28:57.234135    7991 out.go:177] * [test-preload-105000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:28:57.242145    7991 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:28:57.242262    7991 notify.go:220] Checking for updates...
	I0816 05:28:57.250010    7991 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:28:57.253107    7991 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:28:57.257064    7991 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:28:57.258502    7991 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:28:57.261109    7991 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:28:57.264505    7991 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:28:57.264564    7991 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:28:57.268957    7991 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:28:57.276134    7991 start.go:297] selected driver: qemu2
	I0816 05:28:57.276144    7991 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:28:57.276151    7991 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:28:57.278594    7991 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:28:57.281925    7991 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:28:57.285206    7991 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:28:57.285240    7991 cni.go:84] Creating CNI manager for ""
	I0816 05:28:57.285247    7991 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:28:57.285254    7991 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:28:57.285277    7991 start.go:340] cluster config:
	{Name:test-preload-105000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:28:57.289005    7991 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:57.297077    7991 out.go:177] * Starting "test-preload-105000" primary control-plane node in "test-preload-105000" cluster
	I0816 05:28:57.301067    7991 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0816 05:28:57.301158    7991 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/test-preload-105000/config.json ...
	I0816 05:28:57.301179    7991 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/test-preload-105000/config.json: {Name:mk034995f5aae143d116326f82867c90e9b5f9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:28:57.301180    7991 cache.go:107] acquiring lock: {Name:mk5c55d254d40a3e4481cf23c77eb3cae06c2365 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:57.301177    7991 cache.go:107] acquiring lock: {Name:mk0ee725585939851e658401112124e8d27976db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:57.301209    7991 cache.go:107] acquiring lock: {Name:mk63c3f7068b8110c49a44db97f2f3f29e0c72e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:57.301446    7991 start.go:360] acquireMachinesLock for test-preload-105000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:28:57.301412    7991 cache.go:107] acquiring lock: {Name:mkb86993fa085960aba6e9c6cdcc40ce6a42cc68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:57.301418    7991 cache.go:107] acquiring lock: {Name:mkcf5a5903564cd8378f10df5f8a550e065b208d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:57.301467    7991 cache.go:107] acquiring lock: {Name:mka707b4f32356dcf96d4d5a2ca7ce1d4e2d0867 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:57.301488    7991 start.go:364] duration metric: took 34.5µs to acquireMachinesLock for "test-preload-105000"
	I0816 05:28:57.301472    7991 cache.go:107] acquiring lock: {Name:mk28fa2b35c29cce84c95e1213ea8833e1491659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:57.301522    7991 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0816 05:28:57.301544    7991 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:28:57.301506    7991 start.go:93] Provisioning new machine with config: &{Name:test-preload-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:28:57.301569    7991 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:28:57.301525    7991 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0816 05:28:57.301581    7991 cache.go:107] acquiring lock: {Name:mk4f2fa40009b77bf095e0e64be73857c51ce1a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:28:57.301710    7991 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0816 05:28:57.302037    7991 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0816 05:28:57.302043    7991 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:28:57.306075    7991 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:28:57.306751    7991 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0816 05:28:57.310143    7991 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:28:57.313749    7991 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:28:57.313905    7991 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0816 05:28:57.314340    7991 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0816 05:28:57.314851    7991 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0816 05:28:57.314882    7991 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:28:57.314885    7991 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0816 05:28:57.316501    7991 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:28:57.316581    7991 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0816 05:28:57.323657    7991 start.go:159] libmachine.API.Create for "test-preload-105000" (driver="qemu2")
	I0816 05:28:57.323687    7991 client.go:168] LocalClient.Create starting
	I0816 05:28:57.323749    7991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:28:57.323777    7991 main.go:141] libmachine: Decoding PEM data...
	I0816 05:28:57.323785    7991 main.go:141] libmachine: Parsing certificate...
	I0816 05:28:57.323851    7991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:28:57.323877    7991 main.go:141] libmachine: Decoding PEM data...
	I0816 05:28:57.323885    7991 main.go:141] libmachine: Parsing certificate...
	I0816 05:28:57.324215    7991 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:28:57.496820    7991 main.go:141] libmachine: Creating SSH key...
	I0816 05:28:57.714068    7991 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0816 05:28:57.715695    7991 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0816 05:28:57.718287    7991 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0816 05:28:57.728965    7991 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0816 05:28:57.772200    7991 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0816 05:28:57.811264    7991 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0816 05:28:57.811283    7991 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0816 05:28:57.851340    7991 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0816 05:28:57.899348    7991 main.go:141] libmachine: Creating Disk image...
	I0816 05:28:57.899356    7991 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:28:57.899622    7991 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2
	I0816 05:28:57.909519    7991 main.go:141] libmachine: STDOUT: 
	I0816 05:28:57.909536    7991 main.go:141] libmachine: STDERR: 
	I0816 05:28:57.909585    7991 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2 +20000M
	I0816 05:28:57.917630    7991 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:28:57.917645    7991 main.go:141] libmachine: STDERR: 
	I0816 05:28:57.917661    7991 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2
	I0816 05:28:57.917666    7991 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:28:57.917684    7991 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:28:57.917715    7991 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:9a:3a:26:b7:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2
	I0816 05:28:57.919558    7991 main.go:141] libmachine: STDOUT: 
	I0816 05:28:57.919577    7991 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:28:57.919593    7991 client.go:171] duration metric: took 595.910833ms to LocalClient.Create
	I0816 05:28:57.995131    7991 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0816 05:28:57.995146    7991 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 693.733958ms
	I0816 05:28:57.995154    7991 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0816 05:28:58.168696    7991 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0816 05:28:58.168759    7991 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 05:28:58.423297    7991 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 05:28:58.423349    7991 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.122190958s
	I0816 05:28:58.423372    7991 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 05:28:59.617971    7991 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0816 05:28:59.618049    7991 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.316640125s
	I0816 05:28:59.618089    7991 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0816 05:28:59.919836    7991 start.go:128] duration metric: took 2.618285083s to createHost
	I0816 05:28:59.919887    7991 start.go:83] releasing machines lock for "test-preload-105000", held for 2.618426917s
	W0816 05:28:59.919946    7991 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:28:59.933094    7991 out.go:177] * Deleting "test-preload-105000" in qemu2 ...
	W0816 05:28:59.968587    7991 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:28:59.968619    7991 start.go:729] Will try again in 5 seconds ...
	I0816 05:29:00.097854    7991 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0816 05:29:00.097912    7991 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.796369083s
	I0816 05:29:00.097950    7991 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0816 05:29:01.836034    7991 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0816 05:29:01.836086    7991 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.534975208s
	I0816 05:29:01.836113    7991 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0816 05:29:02.335733    7991 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0816 05:29:02.335781    7991 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.034504666s
	I0816 05:29:02.335805    7991 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0816 05:29:03.540183    7991 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0816 05:29:03.540235    7991 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.239155917s
	I0816 05:29:03.540260    7991 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0816 05:29:04.968811    7991 start.go:360] acquireMachinesLock for test-preload-105000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:29:04.969337    7991 start.go:364] duration metric: took 447.75µs to acquireMachinesLock for "test-preload-105000"
	I0816 05:29:04.969483    7991 start.go:93] Provisioning new machine with config: &{Name:test-preload-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:29:04.969733    7991 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:29:04.981352    7991 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:29:05.031031    7991 start.go:159] libmachine.API.Create for "test-preload-105000" (driver="qemu2")
	I0816 05:29:05.031093    7991 client.go:168] LocalClient.Create starting
	I0816 05:29:05.031210    7991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:29:05.031274    7991 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:05.031294    7991 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:05.031353    7991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:29:05.031397    7991 main.go:141] libmachine: Decoding PEM data...
	I0816 05:29:05.031410    7991 main.go:141] libmachine: Parsing certificate...
	I0816 05:29:05.031876    7991 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:29:05.208704    7991 main.go:141] libmachine: Creating SSH key...
	I0816 05:29:05.248715    7991 main.go:141] libmachine: Creating Disk image...
	I0816 05:29:05.248721    7991 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:29:05.248913    7991 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2
	I0816 05:29:05.258240    7991 main.go:141] libmachine: STDOUT: 
	I0816 05:29:05.258264    7991 main.go:141] libmachine: STDERR: 
	I0816 05:29:05.258312    7991 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2 +20000M
	I0816 05:29:05.266571    7991 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:29:05.266599    7991 main.go:141] libmachine: STDERR: 
	I0816 05:29:05.266612    7991 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2
	I0816 05:29:05.266616    7991 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:29:05.266629    7991 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:29:05.266656    7991 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:b1:97:76:5e:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000/disk.qcow2
	I0816 05:29:05.268427    7991 main.go:141] libmachine: STDOUT: 
	I0816 05:29:05.268445    7991 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:29:05.268459    7991 client.go:171] duration metric: took 237.363292ms to LocalClient.Create
	I0816 05:29:07.269132    7991 start.go:128] duration metric: took 2.29939375s to createHost
	I0816 05:29:07.269188    7991 start.go:83] releasing machines lock for "test-preload-105000", held for 2.299859709s
	W0816 05:29:07.269520    7991 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-105000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-105000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:29:07.276591    7991 out.go:201] 
	W0816 05:29:07.281795    7991 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:29:07.281847    7991 out.go:270] * 
	* 
	W0816 05:29:07.284502    7991 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:29:07.293677    7991 out.go:201] 
** /stderr **
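
Every qemu2 start in this run fails the same way: /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" against /var/run/socket_vmnet, so no VM ever boots. A minimal pre-flight check on the build host would separate a stopped daemon from a socket-permission problem. This is a diagnostic sketch, not part of the test run; the launchd label in the last command is an assumption that depends on how socket_vmnet was installed.

	# Is the socket present, and is the daemon accepting connections?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null && echo "socket_vmnet reachable"
	# If unreachable, inspect and restart the launchd job (label is an assumption)
	sudo launchctl list | grep -i socket_vmnet
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
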
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-16 05:29:07.311565 -0700 PDT m=+584.796938918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-105000 -n test-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-105000 -n test-preload-105000: exit status 7 (67.183375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-105000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-105000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-105000
--- FAIL: TestPreload (10.28s)
TestScheduledStopUnix (10.07s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-750000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-750000 --memory=2048 --driver=qemu2 : exit status 80 (9.920364125s)
-- stdout --
	* [scheduled-stop-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-750000" primary control-plane node in "scheduled-stop-750000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-750000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-750000" primary control-plane node in "scheduled-stop-750000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-750000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
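
To rule out QEMU and the guest image themselves, the invocation from the verbose TestPreload log above can be replayed with user-mode networking substituted for the vmnet socket; if the VM boots this way, only the socket_vmnet path is at fault. A sketch under that substitution: -netdev user is not what minikube runs, and the machine directory below only exists before the failed profile is deleted.

	MACHINE_DIR=/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/test-preload-105000
	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 2200 -smp 2 -display none \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -boot d -cdrom "$MACHINE_DIR/boot2docker.iso" \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  "$MACHINE_DIR/disk.qcow2"
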
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-16 05:29:17.379634 -0700 PDT m=+594.865173960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-750000 -n scheduled-stop-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-750000 -n scheduled-stop-750000: exit status 7 (67.453583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-750000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-750000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-750000
--- FAIL: TestScheduledStopUnix (10.07s)
TestSkaffold (12.36s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1885961810 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1885961810 version: (1.063013417s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-022000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-022000 --memory=2600 --driver=qemu2 : exit status 80 (9.96940175s)
-- stdout --
	* [skaffold-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-022000" primary control-plane node in "skaffold-022000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-022000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
-- stdout --
	* [skaffold-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-022000" primary control-plane node in "skaffold-022000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-022000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
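
The skaffold binary itself is fine (its version check passed); the start fails for the same socket_vmnet reason. Since minikube "Automatically selected the socket_vmnet network", the failure can also be sidestepped by requesting the qemu2 driver's builtin user-mode network, which needs no host daemon. A sketch, with the caveats that the flag value is taken from current minikube documentation rather than this run, and that builtin networking does not support features such as minikube tunnel:

	out/minikube-darwin-arm64 start -p skaffold-022000 --memory=2600 --driver=qemu2 --network=builtin
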
panic.go:626: *** TestSkaffold FAILED at 2024-08-16 05:29:29.738993 -0700 PDT m=+607.224736210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-022000 -n skaffold-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-022000 -n skaffold-022000: exit status 7 (61.967041ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-022000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-022000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-022000
--- FAIL: TestSkaffold (12.36s)
TestRunningBinaryUpgrade (600.48s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2968842400 start -p running-upgrade-607000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2968842400 start -p running-upgrade-607000 --memory=2200 --vm-driver=qemu2 : (50.757390125s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-607000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-607000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m34.212572875s)
-- stdout --
	* [running-upgrade-607000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-607000" primary control-plane node in "running-upgrade-607000" cluster
	* Updating the running qemu2 "running-upgrade-607000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0816 05:31:04.165268    8654 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:31:04.165398    8654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:31:04.165401    8654 out.go:358] Setting ErrFile to fd 2...
	I0816 05:31:04.165404    8654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:31:04.165560    8654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:31:04.166668    8654 out.go:352] Setting JSON to false
	I0816 05:31:04.183340    8654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5433,"bootTime":1723806031,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:31:04.183402    8654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:31:04.187866    8654 out.go:177] * [running-upgrade-607000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:31:04.195967    8654 notify.go:220] Checking for updates...
	I0816 05:31:04.195977    8654 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:31:04.199920    8654 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:31:04.209892    8654 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:31:04.213960    8654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:31:04.215064    8654 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:31:04.219842    8654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:31:04.226173    8654 config.go:182] Loaded profile config "running-upgrade-607000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:31:04.229892    8654 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 05:31:04.232919    8654 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:31:04.236869    8654 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:31:04.242882    8654 start.go:297] selected driver: qemu2
	I0816 05:31:04.242887    8654 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51173 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 05:31:04.242932    8654 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:31:04.245309    8654 cni.go:84] Creating CNI manager for ""
	I0816 05:31:04.245332    8654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:31:04.245361    8654 start.go:340] cluster config:
	{Name:running-upgrade-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51173 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 05:31:04.245412    8654 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:31:04.249938    8654 out.go:177] * Starting "running-upgrade-607000" primary control-plane node in "running-upgrade-607000" cluster
	I0816 05:31:04.257868    8654 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 05:31:04.257903    8654 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0816 05:31:04.257912    8654 cache.go:56] Caching tarball of preloaded images
	I0816 05:31:04.257970    8654 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:31:04.257976    8654 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0816 05:31:04.258029    8654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/config.json ...
	I0816 05:31:04.258478    8654 start.go:360] acquireMachinesLock for running-upgrade-607000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:31:04.258517    8654 start.go:364] duration metric: took 30.75µs to acquireMachinesLock for "running-upgrade-607000"
	I0816 05:31:04.258527    8654 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:31:04.258532    8654 fix.go:54] fixHost starting: 
	I0816 05:31:04.259142    8654 fix.go:112] recreateIfNeeded on running-upgrade-607000: state=Running err=<nil>
	W0816 05:31:04.259151    8654 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:31:04.261939    8654 out.go:177] * Updating the running qemu2 "running-upgrade-607000" VM ...
	I0816 05:31:04.269853    8654 machine.go:93] provisionDockerMachine start ...
	I0816 05:31:04.269888    8654 main.go:141] libmachine: Using SSH client type: native
	I0816 05:31:04.269991    8654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009285a0] 0x10092ae00 <nil>  [] 0s} localhost 51141 <nil> <nil>}
	I0816 05:31:04.269995    8654 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:31:04.330431    8654 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-607000
	
	I0816 05:31:04.330445    8654 buildroot.go:166] provisioning hostname "running-upgrade-607000"
	I0816 05:31:04.330503    8654 main.go:141] libmachine: Using SSH client type: native
	I0816 05:31:04.330630    8654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009285a0] 0x10092ae00 <nil>  [] 0s} localhost 51141 <nil> <nil>}
	I0816 05:31:04.330635    8654 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-607000 && echo "running-upgrade-607000" | sudo tee /etc/hostname
	I0816 05:31:04.394171    8654 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-607000
	
	I0816 05:31:04.394222    8654 main.go:141] libmachine: Using SSH client type: native
	I0816 05:31:04.394340    8654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009285a0] 0x10092ae00 <nil>  [] 0s} localhost 51141 <nil> <nil>}
	I0816 05:31:04.394351    8654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-607000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-607000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-607000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:31:04.456842    8654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 05:31:04.456859    8654 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-6249/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-6249/.minikube}
	I0816 05:31:04.456868    8654 buildroot.go:174] setting up certificates
	I0816 05:31:04.456874    8654 provision.go:84] configureAuth start
	I0816 05:31:04.456882    8654 provision.go:143] copyHostCerts
	I0816 05:31:04.456966    8654 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.pem, removing ...
	I0816 05:31:04.456974    8654 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.pem
	I0816 05:31:04.457323    8654 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.pem (1082 bytes)
	I0816 05:31:04.457501    8654 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-6249/.minikube/cert.pem, removing ...
	I0816 05:31:04.457505    8654 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-6249/.minikube/cert.pem
	I0816 05:31:04.457563    8654 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-6249/.minikube/cert.pem (1123 bytes)
	I0816 05:31:04.457665    8654 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-6249/.minikube/key.pem, removing ...
	I0816 05:31:04.457669    8654 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-6249/.minikube/key.pem
	I0816 05:31:04.457716    8654 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-6249/.minikube/key.pem (1679 bytes)
	I0816 05:31:04.457800    8654 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-607000 san=[127.0.0.1 localhost minikube running-upgrade-607000]
	I0816 05:31:04.565887    8654 provision.go:177] copyRemoteCerts
	I0816 05:31:04.565931    8654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:31:04.565941    8654 sshutil.go:53] new ssh client: &{IP:localhost Port:51141 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/running-upgrade-607000/id_rsa Username:docker}
	I0816 05:31:04.599073    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:31:04.607296    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0816 05:31:04.613754    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 05:31:04.621047    8654 provision.go:87] duration metric: took 164.168834ms to configureAuth
	I0816 05:31:04.621056    8654 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:31:04.621174    8654 config.go:182] Loaded profile config "running-upgrade-607000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:31:04.621207    8654 main.go:141] libmachine: Using SSH client type: native
	I0816 05:31:04.621340    8654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009285a0] 0x10092ae00 <nil>  [] 0s} localhost 51141 <nil> <nil>}
	I0816 05:31:04.621350    8654 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:31:04.683140    8654 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:31:04.683149    8654 buildroot.go:70] root file system type: tmpfs
	I0816 05:31:04.683201    8654 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:31:04.683270    8654 main.go:141] libmachine: Using SSH client type: native
	I0816 05:31:04.683386    8654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009285a0] 0x10092ae00 <nil>  [] 0s} localhost 51141 <nil> <nil>}
	I0816 05:31:04.683420    8654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:31:04.749569    8654 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:31:04.749627    8654 main.go:141] libmachine: Using SSH client type: native
	I0816 05:31:04.749763    8654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009285a0] 0x10092ae00 <nil>  [] 0s} localhost 51141 <nil> <nil>}
	I0816 05:31:04.749774    8654 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:31:04.811989    8654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 05:31:04.812000    8654 machine.go:96] duration metric: took 542.149417ms to provisionDockerMachine
	I0816 05:31:04.812006    8654 start.go:293] postStartSetup for "running-upgrade-607000" (driver="qemu2")
	I0816 05:31:04.812012    8654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:31:04.812066    8654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:31:04.812075    8654 sshutil.go:53] new ssh client: &{IP:localhost Port:51141 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/running-upgrade-607000/id_rsa Username:docker}
	I0816 05:31:04.844465    8654 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:31:04.845703    8654 info.go:137] Remote host: Buildroot 2021.02.12
	I0816 05:31:04.845711    8654 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-6249/.minikube/addons for local assets ...
	I0816 05:31:04.845786    8654 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-6249/.minikube/files for local assets ...
	I0816 05:31:04.845895    8654 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0816 05:31:04.846012    8654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:31:04.848958    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0816 05:31:04.856271    8654 start.go:296] duration metric: took 44.259584ms for postStartSetup
	I0816 05:31:04.856288    8654 fix.go:56] duration metric: took 597.766458ms for fixHost
	I0816 05:31:04.856326    8654 main.go:141] libmachine: Using SSH client type: native
	I0816 05:31:04.856439    8654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009285a0] 0x10092ae00 <nil>  [] 0s} localhost 51141 <nil> <nil>}
	I0816 05:31:04.856444    8654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:31:04.917437    8654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723811464.898161013
	
	I0816 05:31:04.917445    8654 fix.go:216] guest clock: 1723811464.898161013
	I0816 05:31:04.917449    8654 fix.go:229] Guest: 2024-08-16 05:31:04.898161013 -0700 PDT Remote: 2024-08-16 05:31:04.85629 -0700 PDT m=+0.711499126 (delta=41.871013ms)
	I0816 05:31:04.917460    8654 fix.go:200] guest clock delta is within tolerance: 41.871013ms
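fix.go validates the machine clock by running `date +%s.%N` in the guest and comparing it against the host; here the 41.871013ms delta is accepted. A rough Go sketch of that comparison (the 2s tolerance below is an assumption; the real threshold lives in minikube's fix.go):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func main() {
        // Guest time as reported by `date +%s.%N` over SSH (value from the log).
        // Float parsing loses some nanosecond precision; fine for a sketch.
        guestOut := "1723811464.898161013"
        sec, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(sec*float64(time.Second)))

        host := time.Now()
        delta := host.Sub(guest)

        const tolerance = 2 * time.Second // hypothetical threshold
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance; would sync clock\n", delta)
        }
    }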
	I0816 05:31:04.917463    8654 start.go:83] releasing machines lock for "running-upgrade-607000", held for 658.952417ms
	I0816 05:31:04.917522    8654 ssh_runner.go:195] Run: cat /version.json
	I0816 05:31:04.917532    8654 sshutil.go:53] new ssh client: &{IP:localhost Port:51141 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/running-upgrade-607000/id_rsa Username:docker}
	I0816 05:31:04.917522    8654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:31:04.917560    8654 sshutil.go:53] new ssh client: &{IP:localhost Port:51141 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/running-upgrade-607000/id_rsa Username:docker}
	W0816 05:31:04.918075    8654 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51141: connect: connection refused
	I0816 05:31:04.918100    8654 retry.go:31] will retry after 190.900499ms: dial tcp [::1]:51141: connect: connection refused
	W0816 05:31:05.143179    8654 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0816 05:31:05.143271    8654 ssh_runner.go:195] Run: systemctl --version
	I0816 05:31:05.145069    8654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 05:31:05.146598    8654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:31:05.146620    8654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0816 05:31:05.149501    8654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0816 05:31:05.154483    8654 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
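The two `find ... -exec sed` invocations above force every bridge and podman CNI config onto the 10.244.0.0/16 pod CIDR (the rules that drop IPv6 entries are omitted here). The same substitution over an in-memory conflist, as a small Go program (the sample JSON is illustrative, not taken from the VM):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conflist := `{
      "cniVersion": "0.4.0",
      "plugins": [{
        "type": "bridge",
        "ipam": { "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" }
      }]
    }`
        // Mirror the sed rules: pin subnet and gateway to minikube's pod CIDR.
        subnet := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
        gateway := regexp.MustCompile(`"gateway":\s*"[^"]*"`)
        out := subnet.ReplaceAllString(conflist, `"subnet": "10.244.0.0/16"`)
        out = gateway.ReplaceAllString(out, `"gateway": "10.244.0.1"`)
        fmt.Println(out)
    }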
	I0816 05:31:05.154491    8654 start.go:495] detecting cgroup driver to use...
	I0816 05:31:05.154589    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:31:05.160620    8654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0816 05:31:05.163646    8654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:31:05.166614    8654 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:31:05.166641    8654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:31:05.169845    8654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:31:05.173173    8654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:31:05.176779    8654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:31:05.179758    8654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:31:05.182633    8654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:31:05.185811    8654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:31:05.189113    8654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:31:05.192592    8654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:31:05.195191    8654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:31:05.197687    8654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:31:05.276549    8654 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 05:31:05.283960    8654 start.go:495] detecting cgroup driver to use...
	I0816 05:31:05.284031    8654 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:31:05.291936    8654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:31:05.297609    8654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:31:05.303839    8654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:31:05.310152    8654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:31:05.314757    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:31:05.320089    8654 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:31:05.321461    8654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:31:05.324599    8654 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0816 05:31:05.329656    8654 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:31:05.405699    8654 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:31:05.481650    8654 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:31:05.481709    8654 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
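The 130-byte /etc/docker/daemon.json written here is not echoed into the log; what matters for the line above is that it pins dockerd's cgroup driver. A plausible minimal equivalent, generated with the standard library (the keys are real dockerd options, but this exact document is an assumption):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed daemon.json shape; only exec-opts is essential to the
        // "cgroupfs" driver decision logged above.
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "log-opts":       map[string]string{"max-size": "100m"},
            "storage-driver": "overlay2",
        }
        out, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }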
	I0816 05:31:05.487570    8654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:31:05.560000    8654 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:31:18.164323    8654 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.604516041s)
	I0816 05:31:18.164387    8654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 05:31:18.168751    8654 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 05:31:18.175977    8654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:31:18.180446    8654 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 05:31:18.267581    8654 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 05:31:18.329665    8654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:31:18.391458    8654 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 05:31:18.398140    8654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:31:18.402985    8654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:31:18.467809    8654 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 05:31:18.507635    8654 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 05:31:18.507699    8654 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 05:31:18.509918    8654 start.go:563] Will wait 60s for crictl version
	I0816 05:31:18.509969    8654 ssh_runner.go:195] Run: which crictl
	I0816 05:31:18.511515    8654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 05:31:18.523521    8654 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0816 05:31:18.523596    8654 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:31:18.541670    8654 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:31:18.560259    8654 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0816 05:31:18.560475    8654 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0816 05:31:18.561960    8654 kubeadm.go:883] updating cluster {Name:running-upgrade-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51173 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0816 05:31:18.562006    8654 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 05:31:18.562044    8654 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:31:18.572497    8654 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 05:31:18.572504    8654 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 05:31:18.572570    8654 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 05:31:18.575546    8654 ssh_runner.go:195] Run: which lz4
	I0816 05:31:18.576841    8654 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 05:31:18.578013    8654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 05:31:18.578024    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0816 05:31:19.486601    8654 docker.go:649] duration metric: took 909.799709ms to copy over tarball
	I0816 05:31:19.486655    8654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 05:31:20.752574    8654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.265924292s)
	I0816 05:31:20.752586    8654 ssh_runner.go:146] rm: /preloaded.tar.lz4
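The preload path avoids pulling images over the network: the host scps a ~360MB lz4 tarball into the guest, untars it over /var/lib/docker, and deletes it. A Go sketch of the extraction step, shelling out to the same `tar` invocation (assumes tar and lz4 are installed, as they are in minikube's guest image):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks an lz4-compressed image tarball under dir,
    // preserving security xattrs, mirroring the tar invocation in the log.
    func extractPreload(tarball, dir string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload missing, would scp it first: %w", err)
        }
        cmd := exec.Command("tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dir, "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Println(err)
        }
    }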
	I0816 05:31:20.768182    8654 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 05:31:20.771591    8654 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0816 05:31:20.776445    8654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:31:20.838056    8654 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:31:22.055970    8654 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.217914958s)
	I0816 05:31:22.056068    8654 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:31:22.071854    8654 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 05:31:22.071862    8654 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 05:31:22.071866    8654 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 05:31:22.076759    8654 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:31:22.079175    8654 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:31:22.081612    8654 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:31:22.081677    8654 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:31:22.083128    8654 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:31:22.083695    8654 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:31:22.085775    8654 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:31:22.085905    8654 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:31:22.086928    8654 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0816 05:31:22.087369    8654 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:31:22.088142    8654 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:31:22.088261    8654 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:31:22.089395    8654 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:31:22.089480    8654 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0816 05:31:22.090471    8654 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:31:22.091053    8654 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:31:22.478705    8654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:31:22.491798    8654 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0816 05:31:22.491825    8654 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:31:22.491882    8654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:31:22.502726    8654 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
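Each "needs transfer" decision follows the same pattern: ask the runtime for the image ID, compare it against the ID expected from cache, and on mismatch remove the stale image before reloading it from the local tarball. A condensed Go sketch of that check (the expected ID comes from the kube-scheduler line above; the daemon prefixes IDs with sha256:):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageID asks the local docker daemon for an image's content ID.
    func imageID(ref string) (string, error) {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", ref).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        ref := "registry.k8s.io/kube-scheduler:v1.24.1"
        want := "sha256:000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f"

        got, err := imageID(ref)
        if err != nil || got != want {
            fmt.Printf("%q needs transfer (have %q)\n", ref, got)
            // Mirrors the log: remove the stale image, then load from cache.
            _ = exec.Command("docker", "rmi", ref).Run()
            return
        }
        fmt.Printf("%q already present\n", ref)
    }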
	I0816 05:31:22.524038    8654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:31:22.534544    8654 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0816 05:31:22.534564    8654 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:31:22.534611    8654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:31:22.537100    8654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0816 05:31:22.537492    8654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:31:22.545351    8654 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0816 05:31:22.551906    8654 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0816 05:31:22.551927    8654 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0816 05:31:22.551979    8654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0816 05:31:22.553452    8654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0816 05:31:22.560232    8654 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0816 05:31:22.560252    8654 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:31:22.560307    8654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:31:22.566380    8654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:31:22.576449    8654 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0816 05:31:22.576554    8654 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0816 05:31:22.576569    8654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0816 05:31:22.576573    8654 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:31:22.576614    8654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0816 05:31:22.580435    8654 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0816 05:31:22.580555    8654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:31:22.589361    8654 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0816 05:31:22.589364    8654 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0816 05:31:22.589400    8654 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:31:22.589440    8654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:31:22.604786    8654 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0816 05:31:22.604818    8654 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0816 05:31:22.604838    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0816 05:31:22.604849    8654 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0816 05:31:22.604854    8654 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0816 05:31:22.604865    8654 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:31:22.604907    8654 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:31:22.615827    8654 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0816 05:31:22.615942    8654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0816 05:31:22.617567    8654 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0816 05:31:22.617581    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0816 05:31:22.627030    8654 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0816 05:31:22.627045    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0816 05:31:22.679854    8654 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0816 05:31:22.679885    8654 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0816 05:31:22.679896    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0816 05:31:22.717929    8654 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0816 05:31:22.817220    8654 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0816 05:31:22.817342    8654 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:31:22.833128    8654 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0816 05:31:22.833154    8654 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:31:22.833216    8654 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:31:23.737995    8654 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 05:31:23.738472    8654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 05:31:23.744106    8654 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0816 05:31:23.744170    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0816 05:31:23.801375    8654 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 05:31:23.801388    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0816 05:31:24.030758    8654 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
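Loading from cache is a plain pipe: `sudo cat <tarball> | docker load`. The same pipe written with os/exec, attaching the tarball directly to the command's stdin (the path below is one of the /var/lib/minikube/images files from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func dockerLoad(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        cmd := exec.Command("docker", "load")
        cmd.Stdin = f // equivalent of `cat tarball | docker load`
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        if err := dockerLoad("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
            fmt.Println("load failed:", err)
        }
    }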
	I0816 05:31:24.030797    8654 cache_images.go:92] duration metric: took 1.958957s to LoadCachedImages
	W0816 05:31:24.030835    8654 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0816 05:31:24.030840    8654 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0816 05:31:24.030903    8654 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-607000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 05:31:24.030959    8654 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 05:31:24.045126    8654 cni.go:84] Creating CNI manager for ""
	I0816 05:31:24.045137    8654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:31:24.045142    8654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 05:31:24.045150    8654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-607000 NodeName:running-upgrade-607000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 05:31:24.045220    8654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-607000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
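The kubeadm config above is rendered from the option struct logged at kubeadm.go:181; values such as AdvertiseAddress, PodSubnet and ServiceCIDR flow directly into the YAML. A toy text/template rendering of one fragment (the template text is illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    type opts struct {
        AdvertiseAddress string
        PodSubnet        string
        ServiceCIDR      string
        K8sVersion       string
    }

    const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: 8443
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(fragment))
        _ = t.Execute(os.Stdout, opts{
            AdvertiseAddress: "10.0.2.15",
            PodSubnet:        "10.244.0.0/16",
            ServiceCIDR:      "10.96.0.0/12",
            K8sVersion:       "v1.24.1",
        })
    }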
	I0816 05:31:24.045270    8654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0816 05:31:24.048368    8654 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 05:31:24.048403    8654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 05:31:24.051617    8654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0816 05:31:24.056785    8654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 05:31:24.061829    8654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0816 05:31:24.067520    8654 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0816 05:31:24.068872    8654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:31:24.132139    8654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:31:24.137460    8654 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000 for IP: 10.0.2.15
	I0816 05:31:24.137468    8654 certs.go:194] generating shared ca certs ...
	I0816 05:31:24.137476    8654 certs.go:226] acquiring lock for ca certs: {Name:mk6cf8af742115923453a119a0b968ea241ec803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:31:24.137719    8654 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.key
	I0816 05:31:24.137766    8654 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/proxy-client-ca.key
	I0816 05:31:24.137771    8654 certs.go:256] generating profile certs ...
	I0816 05:31:24.137843    8654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/client.key
	I0816 05:31:24.137855    8654 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.key.1c6c10a5
	I0816 05:31:24.137864    8654 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.crt.1c6c10a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0816 05:31:24.323532    8654 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.crt.1c6c10a5 ...
	I0816 05:31:24.323543    8654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.crt.1c6c10a5: {Name:mk793a889e63f67e9de19d525161e7071d53c704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:31:24.323804    8654 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.key.1c6c10a5 ...
	I0816 05:31:24.323808    8654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.key.1c6c10a5: {Name:mk16c75368eb33a9179bae97f0f89a5331aa6831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:31:24.323936    8654 certs.go:381] copying /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.crt.1c6c10a5 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.crt
	I0816 05:31:24.324064    8654 certs.go:385] copying /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.key.1c6c10a5 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.key
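The apiserver serving cert is generated with IP SANs for the service VIP, loopback, and the node IP, then signed by the shared minikube CA. A compact Go sketch of issuing such a cert with crypto/x509 (self-signed here for brevity; the real cert is CA-signed):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // IP SANs taken from the crypto.go:68 line above.
        ips := []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
        }
        // Self-signed (template doubles as parent); minikube signs with its CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }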
	I0816 05:31:24.324211    8654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/proxy-client.key
	I0816 05:31:24.324338    8654 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/6746.pem (1338 bytes)
	W0816 05:31:24.324369    8654 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0816 05:31:24.324374    8654 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 05:31:24.324400    8654 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem (1082 bytes)
	I0816 05:31:24.324425    8654 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem (1123 bytes)
	I0816 05:31:24.324450    8654 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/key.pem (1679 bytes)
	I0816 05:31:24.324509    8654 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0816 05:31:24.324907    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 05:31:24.332475    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 05:31:24.339849    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 05:31:24.346642    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 05:31:24.353951    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 05:31:24.361404    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 05:31:24.368735    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 05:31:24.376108    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 05:31:24.383065    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0816 05:31:24.390104    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0816 05:31:24.397217    8654 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 05:31:24.403607    8654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 05:31:24.408333    8654 ssh_runner.go:195] Run: openssl version
	I0816 05:31:24.409986    8654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0816 05:31:24.413400    8654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0816 05:31:24.414908    8654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:20 /usr/share/ca-certificates/6746.pem
	I0816 05:31:24.414931    8654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0816 05:31:24.416682    8654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0816 05:31:24.419407    8654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0816 05:31:24.422452    8654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0816 05:31:24.423789    8654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:20 /usr/share/ca-certificates/67462.pem
	I0816 05:31:24.423813    8654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0816 05:31:24.425511    8654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 05:31:24.428625    8654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 05:31:24.431504    8654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:31:24.432968    8654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:30 /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:31:24.432992    8654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:31:24.434951    8654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
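Each CA is linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is the layout TLS libraries scan when verifying peers. The same step driven from Go, reusing the real `openssl x509 -hash` command from the log (assumes the openssl binary is available):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links pemPath into dir as <subject-hash>.0, matching the
    // `openssl x509 -hash -noout -in ...` + `ln -fs` sequence in the log.
    func installCA(pemPath, dir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(dir, hash+".0")
        _ = os.Remove(link) // emulate ln -f
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }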
	I0816 05:31:24.437868    8654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 05:31:24.439415    8654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 05:31:24.441076    8654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 05:31:24.442876    8654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 05:31:24.444555    8654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 05:31:24.446445    8654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 05:31:24.448309    8654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
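The series of `-checkend 86400` probes asks, for each cert, whether it expires within the next 24 hours; a non-zero exit would trigger regeneration. The equivalent test in pure Go, reading NotAfter with crypto/x509:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // mirroring `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }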
	I0816 05:31:24.450157    8654 kubeadm.go:392] StartCluster: {Name:running-upgrade-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51173 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 05:31:24.450223    8654 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 05:31:24.460557    8654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 05:31:24.464042    8654 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 05:31:24.464047    8654 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 05:31:24.464072    8654 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 05:31:24.466895    8654 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:31:24.466932    8654 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-607000" does not appear in /Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:31:24.466949    8654 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-6249/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-607000" cluster setting kubeconfig missing "running-upgrade-607000" context setting]
	I0816 05:31:24.467115    8654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/kubeconfig: {Name:mka7b2a1dac03f0ea4ac28563b4fe884a2b1b206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:31:24.468241    8654 kapi.go:59] client config for running-upgrade-607000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ee1610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 05:31:24.469390    8654 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 05:31:24.472637    8654 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-607000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0816 05:31:24.472643    8654 kubeadm.go:1160] stopping kube-system containers ...
	I0816 05:31:24.472685    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 05:31:24.483569    8654 docker.go:483] Stopping containers: [25400bb8bf0f 2b719162fbea 2cceb85dd0f0 4e481512b094 547f4d6f9e19 43db198f0476 725c619f20b7 3511e3b1ac0b bbab2d88bc2d 905c40a05b07 caa4cbc22c6d 10831a248c59 8519726a0463 c0a41520e7e6 ea6b749ca9a7 4a673a1b2350]
	I0816 05:31:24.483652    8654 ssh_runner.go:195] Run: docker stop 25400bb8bf0f 2b719162fbea 2cceb85dd0f0 4e481512b094 547f4d6f9e19 43db198f0476 725c619f20b7 3511e3b1ac0b bbab2d88bc2d 905c40a05b07 caa4cbc22c6d 10831a248c59 8519726a0463 c0a41520e7e6 ea6b749ca9a7 4a673a1b2350
	I0816 05:31:24.875407    8654 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 05:31:24.956954    8654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 05:31:24.964683    8654 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 12:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 16 12:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 16 12:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug 16 12:30 /etc/kubernetes/scheduler.conf
	
	I0816 05:31:24.964730    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/admin.conf
	I0816 05:31:24.973085    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:31:24.973132    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 05:31:24.976050    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/kubelet.conf
	I0816 05:31:24.978721    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:31:24.978744    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 05:31:24.981664    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/controller-manager.conf
	I0816 05:31:24.988752    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:31:24.988829    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 05:31:24.991779    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/scheduler.conf
	I0816 05:31:24.994468    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:31:24.994498    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 05:31:24.997288    8654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 05:31:25.000193    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:31:25.068154    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:31:25.877771    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:31:26.067425    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:31:26.094168    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:31:26.117033    8654 api_server.go:52] waiting for apiserver process to appear ...
	I0816 05:31:26.117106    8654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:31:26.619487    8654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:31:27.119146    8654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:31:27.123448    8654 api_server.go:72] duration metric: took 1.006432416s to wait for apiserver process to appear ...
	I0816 05:31:27.123458    8654 api_server.go:88] waiting for apiserver healthz status ...
	I0816 05:31:27.123489    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:31:32.125493    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:31:32.125537    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:31:37.125749    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:31:37.125791    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:31:42.126274    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:31:42.126356    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:31:47.127002    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:31:47.127058    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:31:52.127853    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:31:52.127923    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:31:57.129082    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:31:57.129152    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:32:02.130568    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:32:02.130657    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:32:07.131790    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:32:07.131859    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:32:12.134152    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:32:12.134223    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:32:17.134961    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:32:17.135067    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:32:22.137597    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:32:22.137662    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:32:27.140161    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
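The probe loop above issues one GET to /healthz about every five seconds; each "context deadline exceeded" error is the client-side timeout firing before the apiserver answers, never an HTTP response. A minimal Go sketch of that probe pattern, assuming a 5s per-attempt timeout, a fixed retry budget, and skipped TLS verification (the URL is taken from the log; the budget and TLS handling are assumptions):

    // healthz_poll.go - illustrative sketch only.
    package main

    import (
    	"crypto/tls"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	// The apiserver's cert is not trusted by this client, so verification
    	// is skipped here; a real probe would pin the cluster CA instead.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://10.0.2.15:8443/healthz" // endpoint taken from the log
    	for i := 0; i < 12; i++ {               // retry budget is an assumption
    		resp, err := client.Get(url)
    		if err != nil {
    			log.Printf("stopped: %s: %v", url, err)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			log.Printf("healthz returned ok")
    			return
    		}
    	}
    	log.Fatal("apiserver never became healthy")
    }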
	I0816 05:32:27.140584    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:32:27.175279    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:32:27.175411    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:32:27.196221    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:32:27.196378    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:32:27.210786    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:32:27.210866    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:32:27.223215    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:32:27.223286    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:32:27.235050    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:32:27.235123    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:32:27.251669    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:32:27.251746    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:32:27.261609    8654 logs.go:276] 0 containers: []
	W0816 05:32:27.261620    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:32:27.261683    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:32:27.271626    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
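Each control-plane component is enumerated above with one `docker ps -a` call filtered on the `k8s_<component>` container-name prefix; an empty result (as for kindnet) simply yields zero IDs and a warning. A sketch of that enumeration in Go, assuming docker is available on the local host:

    // list_containers.go - illustrative sketch only.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("docker ps failed for %s: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }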
	I0816 05:32:27.271648    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:32:27.271653    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:32:27.285565    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:32:27.285576    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:32:27.298201    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:32:27.298212    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:32:27.302437    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:32:27.302443    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:32:27.316141    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:32:27.316151    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:32:27.327777    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:32:27.327790    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:32:27.339292    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:32:27.339305    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:32:27.350193    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:32:27.350203    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:32:27.362351    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:32:27.362359    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:32:27.377356    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:32:27.377366    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:32:27.394403    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:32:27.394413    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:32:27.405441    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:32:27.405452    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:32:27.416998    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:32:27.417009    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:32:27.442303    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:32:27.442316    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:32:27.482443    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:32:27.482454    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:32:27.557788    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:32:27.557803    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:32:27.569011    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:32:27.569023    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
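The gathering pass above then tails the last 400 lines of every container it found, plus the kubelet and Docker journald units and the kernel ring buffer, before the next healthz attempt. A condensed Go sketch of that pass, with container IDs, tail length, and unit names copied from the log and error handling elided:

    // gather_logs.go - illustrative sketch only.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name, cmd string) {
    	fmt.Printf("Gathering logs for %s ...\n", name)
    	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("%s", out)
    }

    func main() {
    	// Two example container IDs from the enumeration above; the real
    	// pass iterates over every ID found for every component.
    	for _, id := range []string{"1c1df0a24283", "908e9b841803"} {
    		run(id, "docker logs --tail 400 "+id)
    	}
    	run("kubelet", "sudo journalctl -u kubelet -n 400")
    	run("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
    	run("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }

The same gathering pass repeats verbatim after every failed healthz cycle below, only the ordering of the per-container sections varying.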
	I0816 05:32:30.082483    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:32:35.082947    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:32:35.083344    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:32:35.115516    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:32:35.115644    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:32:35.135226    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:32:35.135327    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:32:35.150262    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:32:35.150335    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:32:35.163535    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:32:35.163608    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:32:35.173811    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:32:35.173885    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:32:35.184360    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:32:35.184421    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:32:35.194701    8654 logs.go:276] 0 containers: []
	W0816 05:32:35.194713    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:32:35.194774    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:32:35.205685    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:32:35.205703    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:32:35.205708    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:32:35.216984    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:32:35.216996    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:32:35.228500    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:32:35.228515    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:32:35.262483    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:32:35.262494    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:32:35.274330    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:32:35.274341    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:32:35.285322    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:32:35.285335    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:32:35.296846    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:32:35.296856    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:32:35.311426    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:32:35.311435    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:32:35.323459    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:32:35.323471    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:32:35.337149    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:32:35.337159    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:32:35.352564    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:32:35.352576    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:32:35.364883    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:32:35.364897    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:32:35.375602    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:32:35.375611    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:32:35.386703    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:32:35.386730    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:32:35.404188    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:32:35.404200    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:32:35.429113    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:32:35.429119    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:32:35.469209    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:32:35.469220    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:32:37.975517    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:32:42.978019    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:32:42.978479    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:32:43.020088    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:32:43.020226    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:32:43.041246    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:32:43.041345    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:32:43.058761    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:32:43.058843    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:32:43.070535    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:32:43.070599    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:32:43.080723    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:32:43.080793    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:32:43.091718    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:32:43.091788    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:32:43.102279    8654 logs.go:276] 0 containers: []
	W0816 05:32:43.102290    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:32:43.102351    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:32:43.112572    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:32:43.112588    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:32:43.112594    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:32:43.123462    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:32:43.123479    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:32:43.134920    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:32:43.134931    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:32:43.146828    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:32:43.146836    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:32:43.164402    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:32:43.164411    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:32:43.181345    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:32:43.181355    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:32:43.223719    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:32:43.223727    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:32:43.242246    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:32:43.242256    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:32:43.260839    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:32:43.260848    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:32:43.275209    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:32:43.275218    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:32:43.286929    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:32:43.286939    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:32:43.298399    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:32:43.298409    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:32:43.309606    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:32:43.309616    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:32:43.335374    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:32:43.335382    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:32:43.339789    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:32:43.339794    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:32:43.373753    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:32:43.373763    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:32:43.389544    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:32:43.389557    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:32:45.902471    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:32:50.905149    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:32:50.905515    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:32:50.937745    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:32:50.937881    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:32:50.957344    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:32:50.957444    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:32:50.971841    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:32:50.971919    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:32:50.984194    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:32:50.984261    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:32:50.994789    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:32:50.994859    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:32:51.005550    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:32:51.005622    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:32:51.016278    8654 logs.go:276] 0 containers: []
	W0816 05:32:51.016288    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:32:51.016346    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:32:51.027251    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:32:51.027268    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:32:51.027274    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:32:51.069013    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:32:51.069024    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:32:51.083226    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:32:51.083237    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:32:51.110970    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:32:51.110981    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:32:51.123130    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:32:51.123141    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:32:51.134830    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:32:51.134843    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:32:51.139287    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:32:51.139294    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:32:51.151039    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:32:51.151049    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:32:51.162619    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:32:51.162633    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:32:51.173696    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:32:51.173710    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:32:51.194009    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:32:51.194021    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:32:51.213041    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:32:51.213054    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:32:51.248603    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:32:51.248616    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:32:51.263154    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:32:51.263167    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:32:51.274333    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:32:51.274343    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:32:51.290192    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:32:51.290206    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:32:51.308319    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:32:51.308329    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:32:53.834079    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:32:58.836746    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:32:58.837167    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:32:58.877947    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:32:58.878090    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:32:58.905748    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:32:58.905840    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:32:58.919830    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:32:58.919912    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:32:58.932553    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:32:58.932627    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:32:58.945923    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:32:58.945996    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:32:58.957599    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:32:58.957666    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:32:58.968010    8654 logs.go:276] 0 containers: []
	W0816 05:32:58.968021    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:32:58.968073    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:32:58.979304    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:32:58.979320    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:32:58.979325    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:32:58.991159    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:32:58.991172    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:32:59.008731    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:32:59.008745    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:32:59.019955    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:32:59.019970    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:32:59.031407    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:32:59.031418    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:32:59.046934    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:32:59.046943    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:32:59.059439    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:32:59.059450    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:32:59.084782    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:32:59.084789    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:32:59.121580    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:32:59.121591    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:32:59.138827    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:32:59.138837    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:32:59.150843    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:32:59.150854    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:32:59.162415    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:32:59.162426    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:32:59.174268    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:32:59.174277    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:32:59.217645    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:32:59.217657    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:32:59.222765    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:32:59.222774    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:32:59.236794    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:32:59.236805    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:32:59.248839    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:32:59.248853    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:33:01.764215    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:33:06.767107    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:33:06.767564    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:33:06.806774    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:33:06.806892    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:33:06.828362    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:33:06.828459    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:33:06.843132    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:33:06.843217    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:33:06.855803    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:33:06.855868    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:33:06.866932    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:33:06.866994    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:33:06.877912    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:33:06.877984    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:33:06.895636    8654 logs.go:276] 0 containers: []
	W0816 05:33:06.895653    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:33:06.895716    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:33:06.906311    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:33:06.906330    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:33:06.906335    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:33:06.924855    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:33:06.924865    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:33:06.936317    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:33:06.936327    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:33:06.962230    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:33:06.962237    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:33:07.004037    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:33:07.004044    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:33:07.017860    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:33:07.017870    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:33:07.029252    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:33:07.029264    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:33:07.042863    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:33:07.042874    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:33:07.054644    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:33:07.054655    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:33:07.068530    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:33:07.068542    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:33:07.083738    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:33:07.083748    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:33:07.094518    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:33:07.094534    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:33:07.105888    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:33:07.105899    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:33:07.122886    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:33:07.122896    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:33:07.134451    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:33:07.134461    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:33:07.145794    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:33:07.145804    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:33:07.150767    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:33:07.150772    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:33:09.688026    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:33:14.690809    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:33:14.691234    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:33:14.734406    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:33:14.734546    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:33:14.756550    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:33:14.756670    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:33:14.774710    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:33:14.774785    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:33:14.786756    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:33:14.786852    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:33:14.800547    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:33:14.800619    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:33:14.813974    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:33:14.814049    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:33:14.823420    8654 logs.go:276] 0 containers: []
	W0816 05:33:14.823432    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:33:14.823493    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:33:14.834087    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:33:14.834106    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:33:14.834112    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:33:14.845606    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:33:14.845622    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:33:14.857368    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:33:14.857381    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:33:14.868867    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:33:14.868877    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:33:14.882635    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:33:14.882648    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:33:14.887267    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:33:14.887274    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:33:14.921787    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:33:14.921797    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:33:14.933311    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:33:14.933321    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:33:14.944584    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:33:14.944593    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:33:14.968799    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:33:14.968807    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:33:14.983073    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:33:14.983083    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:33:14.996717    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:33:14.996728    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:33:15.013585    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:33:15.013594    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:33:15.028240    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:33:15.028249    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:33:15.040106    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:33:15.040116    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:33:15.051635    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:33:15.051646    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:33:15.092797    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:33:15.092805    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:33:17.608984    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:33:22.611736    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:33:22.612068    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:33:22.644844    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:33:22.644971    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:33:22.664048    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:33:22.664140    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:33:22.678110    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:33:22.678182    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:33:22.690348    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:33:22.690416    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:33:22.701370    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:33:22.701440    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:33:22.712073    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:33:22.712135    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:33:22.725493    8654 logs.go:276] 0 containers: []
	W0816 05:33:22.725505    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:33:22.725567    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:33:22.737951    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:33:22.737969    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:33:22.737975    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:33:22.751838    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:33:22.751848    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:33:22.763091    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:33:22.763104    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:33:22.774587    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:33:22.774597    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:33:22.800899    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:33:22.800906    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:33:22.812066    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:33:22.812076    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:33:22.823797    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:33:22.823806    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:33:22.827867    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:33:22.827874    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:33:22.842783    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:33:22.842792    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:33:22.854706    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:33:22.854717    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:33:22.866763    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:33:22.866772    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:33:22.877812    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:33:22.877825    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:33:22.890519    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:33:22.890531    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:33:22.912719    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:33:22.912728    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:33:22.954604    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:33:22.954615    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:33:22.991427    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:33:22.991439    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:33:23.005956    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:33:23.005966    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:33:25.522595    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:33:30.524800    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:33:30.525123    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:33:30.550480    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:33:30.550554    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:33:30.564822    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:33:30.564890    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:33:30.575849    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:33:30.575915    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:33:30.586476    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:33:30.586549    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:33:30.597382    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:33:30.597469    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:33:30.608822    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:33:30.608887    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:33:30.618446    8654 logs.go:276] 0 containers: []
	W0816 05:33:30.618458    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:33:30.618509    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:33:30.629181    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:33:30.629199    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:33:30.629204    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:33:30.645046    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:33:30.645060    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:33:30.656744    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:33:30.656753    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:33:30.667847    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:33:30.667859    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:33:30.680255    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:33:30.680268    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:33:30.691964    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:33:30.691974    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:33:30.735595    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:33:30.735605    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:33:30.739962    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:33:30.739972    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:33:30.751577    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:33:30.751588    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:33:30.769243    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:33:30.769254    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:33:30.785355    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:33:30.785365    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:33:30.847128    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:33:30.847139    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:33:30.862292    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:33:30.862303    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:33:30.876607    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:33:30.876619    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:33:30.890035    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:33:30.890045    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:33:30.905729    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:33:30.905739    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:33:30.930543    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:33:30.930552    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:33:33.444777    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:33:38.447486    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:33:38.447669    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:33:38.459774    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:33:38.459849    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:33:38.470804    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:33:38.470895    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:33:38.493860    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:33:38.493933    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:33:38.504629    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:33:38.504701    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:33:38.514929    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:33:38.514995    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:33:38.525911    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:33:38.525987    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:33:38.536078    8654 logs.go:276] 0 containers: []
	W0816 05:33:38.536091    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:33:38.536148    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:33:38.546906    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:33:38.546921    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:33:38.546925    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:33:38.564948    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:33:38.564961    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:33:38.577253    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:33:38.577264    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:33:38.592069    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:33:38.592081    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:33:38.614095    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:33:38.614105    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:33:38.658289    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:33:38.658296    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:33:38.697065    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:33:38.697077    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:33:38.714783    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:33:38.714792    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:33:38.726090    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:33:38.726101    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:33:38.745850    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:33:38.745860    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:33:38.757548    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:33:38.757563    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:33:38.769858    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:33:38.769872    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:33:38.788568    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:33:38.788579    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:33:38.800485    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:33:38.800500    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:33:38.812335    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:33:38.812346    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:33:38.816661    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:33:38.816668    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:33:38.830656    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:33:38.830664    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:33:41.357627    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:33:46.359824    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:33:46.359977    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:33:46.376500    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:33:46.376582    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:33:46.389368    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:33:46.389429    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:33:46.400413    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:33:46.400488    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:33:46.410598    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:33:46.410665    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:33:46.421647    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:33:46.421709    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:33:46.431801    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:33:46.431872    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:33:46.442071    8654 logs.go:276] 0 containers: []
	W0816 05:33:46.442082    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:33:46.442143    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:33:46.452940    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
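
Every collection pass begins by resolving container IDs per control-plane component with a Docker name filter, exactly as the `docker ps -a --filter=name=k8s_... --format={{.ID}}` invocations above show; two IDs per component are the exited first instance plus its restarted replacement, and the empty kindnet result simply means that CNI is not deployed here. A sketch of the discovery step (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers, running or exited, whose name
    // matches the k8s_<component> prefix and returns their short IDs,
    // mirroring the docker ps command logged for each component.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }
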
	I0816 05:33:46.452956    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:33:46.452961    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:33:46.463826    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:33:46.463840    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:33:46.475489    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:33:46.475500    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:33:46.487247    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:33:46.487258    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:33:46.504405    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:33:46.504414    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:33:46.546659    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:33:46.546668    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:33:46.551140    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:33:46.551145    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:33:46.562331    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:33:46.562343    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:33:46.573973    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:33:46.573984    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:33:46.585371    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:33:46.585380    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:33:46.610908    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:33:46.610916    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:33:46.656858    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:33:46.656870    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:33:46.671888    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:33:46.671896    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:33:46.685643    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:33:46.685655    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:33:46.710072    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:33:46.710085    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:33:46.721935    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:33:46.721947    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:33:46.736245    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:33:46.736256    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
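
For each ID discovered, the runner then tails the last 400 lines of that container's output via /bin/bash -c "docker logs --tail 400 <id>". A self-contained sketch of that step, using the two apiserver IDs from the log as sample input:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainerLogs captures the last n lines from one container,
    // the `docker logs --tail 400 <id>` step repeated above for every ID.
    // CombinedOutput keeps stderr, where most control-plane components
    // write their logs.
    func tailContainerLogs(id string, n int) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("docker logs --tail %d %s", n, id)).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Both the exited and the restarted kube-apiserver containers
    	// get tailed on each pass, per the enumeration above.
    	for _, id := range []string{"1c1df0a24283", "7da996bebe3e"} {
    		out, _ := tailContainerLogs(id, 400)
    		fmt.Print(out)
    	}
    }
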
	I0816 05:33:49.256467    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:33:54.259073    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:33:54.259194    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:33:54.270771    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:33:54.270855    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:33:54.283147    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:33:54.283230    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:33:54.296954    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:33:54.297033    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:33:54.309019    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:33:54.309093    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:33:54.320966    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:33:54.321040    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:33:54.333887    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:33:54.333958    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:33:54.347219    8654 logs.go:276] 0 containers: []
	W0816 05:33:54.347232    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:33:54.347306    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:33:54.363034    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:33:54.363054    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:33:54.363061    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:33:54.376247    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:33:54.376261    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:33:54.389512    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:33:54.389526    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:33:54.402871    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:33:54.402886    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:33:54.420831    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:33:54.420842    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:33:54.441327    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:33:54.441341    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:33:54.446449    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:33:54.446461    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:33:54.462420    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:33:54.462431    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:33:54.482177    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:33:54.482188    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:33:54.527086    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:33:54.527101    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:33:54.544989    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:33:54.545011    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:33:54.560692    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:33:54.560705    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:33:54.574123    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:33:54.574138    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:33:54.587516    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:33:54.587528    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:33:54.627149    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:33:54.627163    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:33:54.644160    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:33:54.644182    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:33:54.658404    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:33:54.658418    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
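
Container logs alone would miss failures in the node itself, so each pass also pulls host-level diagnostics: the kubelet journal, the docker and cri-docker journals, and a severity-filtered dmesg tail, using the exact pipelines logged above. Wrapped as a Go helper (the helper name is an assumption for the sketch):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a shell pipeline via bash -c, the same invocation
    // style ssh_runner logs for every gathering step.
    func run(pipeline string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, p := range []string{
    		"sudo journalctl -u kubelet -n 400",
    		"sudo journalctl -u docker -u cri-docker -n 400",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	} {
    		if out, err := run(p); err != nil {
    			fmt.Println(p, "failed:", err)
    		} else {
    			fmt.Print(out)
    		}
    	}
    }
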
	I0816 05:33:57.190245    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:02.192487    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:02.192683    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:02.209058    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:02.209152    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:02.223124    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:02.223204    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:02.234479    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:02.234544    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:02.244795    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:02.244865    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:02.255104    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:02.255169    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:02.265547    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:02.265627    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:02.275467    8654 logs.go:276] 0 containers: []
	W0816 05:34:02.275481    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:02.275546    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:02.290790    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:02.290808    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:02.290814    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:02.306439    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:02.306450    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:02.321557    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:02.321570    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:02.333803    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:02.333814    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:02.357743    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:02.357750    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:02.398433    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:02.398440    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:02.409695    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:02.409705    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:02.421698    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:02.421711    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:02.439398    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:02.439411    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:02.455187    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:02.455201    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:02.466919    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:02.466929    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:02.471177    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:02.471186    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:02.485675    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:02.485688    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:02.497349    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:02.497361    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:02.508158    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:02.508168    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:02.547172    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:02.547186    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:02.561404    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:02.561417    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
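
The "container status" step is deliberately runtime-agnostic. Inside the backticks, `which crictl || echo crictl` substitutes the crictl path when one exists and otherwise emits the bare name so the outer command still parses; when that command then fails, the trailing `|| sudo docker ps -a` falls back to Docker, which is what succeeds on this Docker-runtime node. A sketch invoking the same fallback chain:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same fallback chain as the "container status" step above:
    	// prefer crictl when installed, otherwise list via docker.
    	const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Print(string(out))
    }
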
	I0816 05:34:05.074844    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:10.076954    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:10.077095    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:10.088315    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:10.088385    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:10.099159    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:10.099230    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:10.110961    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:10.111028    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:10.121668    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:10.121739    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:10.132106    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:10.132180    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:10.143030    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:10.143097    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:10.154277    8654 logs.go:276] 0 containers: []
	W0816 05:34:10.154289    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:10.154353    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:10.164990    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:10.165007    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:10.165014    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:10.190858    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:10.190880    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:10.232792    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:10.232803    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:10.268994    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:10.269005    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:10.284445    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:10.284462    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:10.296082    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:10.296095    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:10.309096    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:10.309107    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:10.323439    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:10.323450    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:10.334581    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:10.334595    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:10.346103    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:10.346115    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:10.363728    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:10.363739    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:10.379257    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:10.379267    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:10.390717    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:10.390730    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:10.402003    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:10.402016    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:10.406644    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:10.406651    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:10.420264    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:10.420274    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:10.432003    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:10.432017    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
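
The "describe nodes" step avoids any host-side kubectl or context: it runs the version-matched binary minikube provisions inside the guest at /var/lib/minikube/binaries/v1.24.1/kubectl, pointed at the in-VM kubeconfig. A sketch of that invocation (the function name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // describeNodes runs the kubectl binary minikube ships inside the
    // guest, pinned to the cluster's Kubernetes version, with an explicit
    // kubeconfig so no host-side configuration is needed.
    func describeNodes(version string) (string, error) {
    	bin := "/var/lib/minikube/binaries/" + version + "/kubectl"
    	out, err := exec.Command("sudo", bin, "describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := describeNodes("v1.24.1")
    	if err != nil {
    		fmt.Println("describe nodes failed:", err)
    	}
    	fmt.Print(out)
    }
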
	I0816 05:34:12.950204    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:17.952538    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:17.952705    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:17.968864    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:17.968944    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:17.980527    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:17.980594    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:17.990628    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:17.990694    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:18.001347    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:18.001420    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:18.011796    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:18.011869    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:18.022173    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:18.022234    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:18.032018    8654 logs.go:276] 0 containers: []
	W0816 05:34:18.032031    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:18.032084    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:18.047468    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:18.047486    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:18.047491    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:18.061810    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:18.061821    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:18.073563    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:18.073576    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:18.115157    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:18.115169    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:18.155976    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:18.155988    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:18.167286    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:18.167297    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:18.178794    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:18.178805    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:18.190662    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:18.190677    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:18.206657    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:18.206671    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:18.225576    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:18.225586    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:18.239958    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:18.239972    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:18.251862    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:18.251877    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:18.256491    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:18.256499    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:18.267735    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:18.267748    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:18.285762    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:18.285773    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:18.300819    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:18.300831    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:18.317908    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:18.317918    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:20.843393    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:25.846297    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:25.846927    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:25.889888    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:25.890027    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:25.911384    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:25.911499    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:25.926780    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:25.926859    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:25.938911    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:25.938983    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:25.959005    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:25.959083    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:25.978874    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:25.978956    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:25.994223    8654 logs.go:276] 0 containers: []
	W0816 05:34:25.994238    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:25.994297    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:26.009841    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:26.009861    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:26.009867    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:26.051998    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:26.052011    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:26.070403    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:26.070414    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:26.081974    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:26.081988    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:26.086508    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:26.086518    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:26.102979    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:26.102991    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:26.115086    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:26.115096    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:26.139631    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:26.139642    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:26.183534    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:26.183550    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:26.194501    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:26.194521    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:26.206121    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:26.206132    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:26.227375    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:26.227387    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:26.244096    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:26.244108    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:26.262316    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:26.262328    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:26.273416    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:26.273430    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:26.287873    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:26.287883    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:26.299168    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:26.299181    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:28.815702    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:33.816180    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:33.816374    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:33.828720    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:33.828799    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:33.841046    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:33.841119    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:33.853287    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:33.853357    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:33.872270    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:33.872364    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:33.890144    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:33.890218    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:33.903461    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:33.903531    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:33.915837    8654 logs.go:276] 0 containers: []
	W0816 05:34:33.915849    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:33.915908    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:33.929862    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:33.929881    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:33.929889    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:33.942544    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:33.942558    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:33.980886    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:33.980896    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:33.993046    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:33.993059    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:34.008290    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:34.008301    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:34.025617    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:34.025627    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:34.039233    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:34.039244    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:34.043746    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:34.043753    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:34.057888    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:34.057900    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:34.071590    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:34.071604    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:34.085184    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:34.085195    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:34.097707    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:34.097717    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:34.109698    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:34.109711    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:34.121647    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:34.121656    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:34.145607    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:34.145617    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:34.189270    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:34.189293    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:34.205936    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:34.205957    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:36.720545    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:41.723152    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:41.723568    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:41.762694    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:41.762840    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:41.784299    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:41.784407    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:41.799029    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:41.799105    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:41.811687    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:41.811766    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:41.822404    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:41.822465    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:41.833543    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:41.833606    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:41.844547    8654 logs.go:276] 0 containers: []
	W0816 05:34:41.844559    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:41.844622    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:41.855664    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:41.855680    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:41.855686    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:41.867255    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:41.867268    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:41.886434    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:41.886446    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:41.903125    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:41.903137    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:41.914343    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:41.914352    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:41.938443    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:41.938453    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:41.949902    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:41.949918    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:41.992115    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:41.992125    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:41.996332    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:41.996338    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:42.030722    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:42.030732    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:42.042628    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:42.042642    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:42.057353    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:42.057366    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:42.075362    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:42.075392    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:42.091149    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:42.091160    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:42.104847    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:42.104864    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:42.118884    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:42.118898    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:42.133606    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:42.133619    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:44.645131    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:49.647209    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:49.647311    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:49.658329    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:49.658400    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:49.669186    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:49.669256    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:49.679769    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:49.679844    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:49.690232    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:49.690308    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:49.701303    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:49.701420    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:49.712547    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:49.712630    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:49.722613    8654 logs.go:276] 0 containers: []
	W0816 05:34:49.722628    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:49.722690    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:49.733402    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:49.733421    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:49.733430    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:49.745535    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:49.745546    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:49.757413    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:49.757426    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:49.782072    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:49.782080    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:49.817710    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:49.817722    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:49.829385    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:49.829397    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:49.843692    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:49.843707    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:49.855360    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:49.855371    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:49.869660    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:49.869672    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:49.894468    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:49.894480    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:49.909597    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:49.909607    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:49.914115    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:49.914122    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:49.932844    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:49.932860    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:49.945191    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:49.945201    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:49.957217    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:49.957234    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:50.000710    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:50.000720    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:50.012468    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:50.012486    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:52.526230    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:57.528425    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:57.528600    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:57.540005    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:57.540079    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:57.551041    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:57.551111    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:57.561519    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:57.561593    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:57.576177    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:57.576248    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:57.586587    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:57.586656    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:57.597254    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:57.597328    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:57.607926    8654 logs.go:276] 0 containers: []
	W0816 05:34:57.607937    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:57.607998    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:57.619393    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:57.619413    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:57.619419    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:57.660066    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:57.660075    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:57.664750    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:57.664757    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:57.680333    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:57.680348    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:57.698328    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:57.698342    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:57.710037    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:57.710048    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:57.730468    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:57.730478    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:57.742135    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:57.742150    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:57.755146    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:57.755159    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:57.791281    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:57.791294    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:57.802745    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:57.802756    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:57.817398    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:57.817408    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:57.828376    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:57.828390    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:57.841297    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:57.841309    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:57.864221    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:57.864231    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:57.876166    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:57.876179    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:57.888638    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:57.888648    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:35:00.403266    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:05.405436    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:05.405639    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:35:05.420025    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:35:05.420110    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:35:05.431600    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:35:05.431683    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:35:05.442235    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:35:05.442312    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:35:05.452656    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:35:05.452724    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:35:05.463340    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:35:05.463408    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:35:05.474402    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:35:05.474469    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:35:05.485025    8654 logs.go:276] 0 containers: []
	W0816 05:35:05.485038    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:35:05.485101    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:35:05.495319    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:35:05.495335    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:35:05.495343    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:35:05.506632    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:35:05.506645    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:35:05.518196    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:35:05.518209    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:35:05.529923    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:35:05.529936    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:35:05.541671    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:35:05.541702    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:35:05.553510    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:35:05.553521    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:35:05.577246    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:35:05.577256    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:35:05.618826    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:35:05.618836    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:35:05.653583    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:35:05.653594    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:35:05.666887    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:35:05.666910    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:35:05.677866    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:35:05.677878    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:35:05.689528    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:35:05.689540    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:35:05.693964    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:35:05.693973    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:35:05.712621    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:35:05.712631    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:35:05.724574    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:35:05.724588    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:35:05.743828    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:35:05.743838    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:35:05.761186    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:35:05.761196    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:35:08.278487    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:13.280300    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:13.280409    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:35:13.291420    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:35:13.291497    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:35:13.301939    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:35:13.302015    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:35:13.312665    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:35:13.312736    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:35:13.323816    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:35:13.323889    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:35:13.334472    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:35:13.334542    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:35:13.345049    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:35:13.345121    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:35:13.355231    8654 logs.go:276] 0 containers: []
	W0816 05:35:13.355242    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:35:13.355301    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:35:13.365696    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:35:13.365713    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:35:13.365719    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:35:13.377019    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:35:13.377029    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:35:13.390998    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:35:13.391008    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:35:13.405341    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:35:13.405448    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:35:13.417046    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:35:13.417059    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:35:13.434368    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:35:13.434382    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:35:13.446260    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:35:13.446271    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:35:13.470290    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:35:13.470301    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:35:13.512708    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:35:13.512725    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:35:13.547621    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:35:13.547634    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:35:13.559643    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:35:13.559654    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:35:13.571278    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:35:13.571291    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:35:13.575792    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:35:13.575799    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:35:13.587559    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:35:13.587571    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:35:13.602594    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:35:13.602605    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:35:13.613928    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:35:13.613939    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:35:13.625344    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:35:13.625357    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:35:16.138649    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:21.140473    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:21.140574    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:35:21.151911    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:35:21.151994    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:35:21.162731    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:35:21.162804    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:35:21.173072    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:35:21.173141    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:35:21.184016    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:35:21.184084    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:35:21.195284    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:35:21.195356    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:35:21.205715    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:35:21.205792    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:35:21.216114    8654 logs.go:276] 0 containers: []
	W0816 05:35:21.216126    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:35:21.216188    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:35:21.226465    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:35:21.226480    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:35:21.226486    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:35:21.237893    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:35:21.237905    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:35:21.253028    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:35:21.253040    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:35:21.266341    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:35:21.266351    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:35:21.278062    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:35:21.278073    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:35:21.301833    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:35:21.301840    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:35:21.316576    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:35:21.316588    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:35:21.328696    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:35:21.328707    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:35:21.340759    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:35:21.340770    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:35:21.356650    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:35:21.356660    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:35:21.395322    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:35:21.395334    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:35:21.436604    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:35:21.436612    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:35:21.455375    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:35:21.455388    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:35:21.469138    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:35:21.469148    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:35:21.491202    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:35:21.491213    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:35:21.518423    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:35:21.518436    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:35:21.531000    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:35:21.531011    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:35:24.037649    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:29.040272    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:29.040385    8654 kubeadm.go:597] duration metric: took 4m4.580362875s to restartPrimaryControlPlane
	W0816 05:35:29.040463    8654 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 05:35:29.040500    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0816 05:35:30.043030    8654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002534208s)
	I0816 05:35:30.043093    8654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:35:30.048124    8654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 05:35:30.051426    8654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 05:35:30.054201    8654 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 05:35:30.054207    8654 kubeadm.go:157] found existing configuration files:
	
	I0816 05:35:30.054234    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/admin.conf
	I0816 05:35:30.056738    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 05:35:30.056762    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 05:35:30.059964    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/kubelet.conf
	I0816 05:35:30.063038    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 05:35:30.063066    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 05:35:30.065727    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/controller-manager.conf
	I0816 05:35:30.068574    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 05:35:30.068595    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 05:35:30.071727    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/scheduler.conf
	I0816 05:35:30.074394    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 05:35:30.074415    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
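
The grep/rm pairs above are the stale-kubeconfig check: for each kubeadm-managed conf file, grep for the expected control-plane endpoint and delete the file when the endpoint is absent. Here the files do not exist at all, so every grep exits with status 2 and the rm is a no-op. A sketch of the pattern, with the file list and endpoint taken from the log and the helper name my own:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // removeIfStale deletes conf when it does not mention the expected
    // control-plane endpoint. grep exits non-zero both when the pattern is
    // absent and when the file is missing; either way the file is treated
    // as stale, matching the "may not be in ... - will remove" lines above.
    func removeIfStale(conf, endpoint string) error {
        if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
            return exec.Command("sudo", "rm", "-f", conf).Run()
        }
        return nil
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:51173"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := removeIfStale(conf, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }
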
	I0816 05:35:30.076923    8654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 05:35:30.093763    8654 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0816 05:35:30.093807    8654 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 05:35:30.142570    8654 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 05:35:30.142629    8654 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 05:35:30.142683    8654 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 05:35:30.194901    8654 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 05:35:30.200146    8654 out.go:235]   - Generating certificates and keys ...
	I0816 05:35:30.200183    8654 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 05:35:30.200224    8654 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 05:35:30.200271    8654 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 05:35:30.200317    8654 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 05:35:30.200356    8654 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 05:35:30.200388    8654 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 05:35:30.200427    8654 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 05:35:30.200459    8654 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 05:35:30.200496    8654 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 05:35:30.200531    8654 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 05:35:30.200558    8654 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 05:35:30.200587    8654 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 05:35:30.371424    8654 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 05:35:30.651194    8654 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 05:35:30.727753    8654 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 05:35:30.832352    8654 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 05:35:30.865042    8654 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 05:35:30.865330    8654 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 05:35:30.865381    8654 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 05:35:30.934119    8654 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 05:35:30.939208    8654 out.go:235]   - Booting up control plane ...
	I0816 05:35:30.939263    8654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 05:35:30.939326    8654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 05:35:30.939366    8654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 05:35:30.939408    8654 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 05:35:30.939493    8654 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 05:35:35.945735    8654 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.007389 seconds
	I0816 05:35:35.945921    8654 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 05:35:35.960715    8654 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 05:35:36.478363    8654 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 05:35:36.478482    8654 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-607000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 05:35:36.982555    8654 kubeadm.go:310] [bootstrap-token] Using token: zhwf8w.yn3s54awl8nvlo1t
	I0816 05:35:36.986096    8654 out.go:235]   - Configuring RBAC rules ...
	I0816 05:35:36.986160    8654 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 05:35:36.986205    8654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 05:35:36.988222    8654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 05:35:36.992454    8654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 05:35:36.993265    8654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 05:35:36.994450    8654 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 05:35:36.997548    8654 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 05:35:37.172024    8654 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 05:35:37.386124    8654 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 05:35:37.386695    8654 kubeadm.go:310] 
	I0816 05:35:37.386729    8654 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 05:35:37.386733    8654 kubeadm.go:310] 
	I0816 05:35:37.386782    8654 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 05:35:37.386796    8654 kubeadm.go:310] 
	I0816 05:35:37.386818    8654 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 05:35:37.386848    8654 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 05:35:37.386882    8654 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 05:35:37.386886    8654 kubeadm.go:310] 
	I0816 05:35:37.386920    8654 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 05:35:37.386928    8654 kubeadm.go:310] 
	I0816 05:35:37.386959    8654 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 05:35:37.386962    8654 kubeadm.go:310] 
	I0816 05:35:37.386988    8654 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 05:35:37.387043    8654 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 05:35:37.387078    8654 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 05:35:37.387081    8654 kubeadm.go:310] 
	I0816 05:35:37.387123    8654 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 05:35:37.387186    8654 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 05:35:37.387192    8654 kubeadm.go:310] 
	I0816 05:35:37.387268    8654 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zhwf8w.yn3s54awl8nvlo1t \
	I0816 05:35:37.387319    8654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23cf10825d548a004e2d3ef8e1c65218486081db837b36803636fece4fac457f \
	I0816 05:35:37.387331    8654 kubeadm.go:310] 	--control-plane 
	I0816 05:35:37.387336    8654 kubeadm.go:310] 
	I0816 05:35:37.387378    8654 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 05:35:37.387382    8654 kubeadm.go:310] 
	I0816 05:35:37.387422    8654 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zhwf8w.yn3s54awl8nvlo1t \
	I0816 05:35:37.387475    8654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23cf10825d548a004e2d3ef8e1c65218486081db837b36803636fece4fac457f 
	I0816 05:35:37.387704    8654 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 05:35:37.387715    8654 cni.go:84] Creating CNI manager for ""
	I0816 05:35:37.387723    8654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:35:37.391367    8654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 05:35:37.398289    8654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 05:35:37.401507    8654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
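
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration the recommendation step selected. The log does not show its contents; a typical bridge conflist looks along these lines (all values illustrative, embedded in a small Go writer that mirrors the copy step; writing to that path requires root):

    package main

    import (
        "fmt"
        "os"
    )

    // Illustrative bridge CNI config: a bridge plugin with host-local IPAM
    // plus a portmap plugin. Not minikube's actual 496-byte template.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
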
	I0816 05:35:37.406994    8654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 05:35:37.407062    8654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 05:35:37.407069    8654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-607000 minikube.k8s.io/updated_at=2024_08_16T05_35_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=running-upgrade-607000 minikube.k8s.io/primary=true
	I0816 05:35:37.449384    8654 ops.go:34] apiserver oom_adj: -16
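
The oom_adj line above comes from the "cat /proc/$(pgrep kube-apiserver)/oom_adj" step: -16 on the legacy -17..+15 scale means the kernel strongly prefers other victims over the apiserver under memory pressure. A small sketch of reading that value (note /proc/<pid>/oom_adj is the deprecated interface; modern kernels also expose oom_score_adj):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readOOMAdj returns the legacy OOM-killer adjustment for a PID.
    func readOOMAdj(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        // Demonstrates the read against the current process.
        v, err := readOOMAdj(os.Getpid())
        fmt.Println(v, err)
    }
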
	I0816 05:35:37.449456    8654 kubeadm.go:1113] duration metric: took 42.437375ms to wait for elevateKubeSystemPrivileges
	I0816 05:35:37.449467    8654 kubeadm.go:394] duration metric: took 4m13.003487625s to StartCluster
	I0816 05:35:37.449476    8654 settings.go:142] acquiring lock: {Name:mkec9dae897ed6cd1355cb2ba10161c54c163fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:35:37.449648    8654 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:35:37.450051    8654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/kubeconfig: {Name:mka7b2a1dac03f0ea4ac28563b4fe884a2b1b206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:35:37.450273    8654 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:35:37.450302    8654 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 05:35:37.450341    8654 config.go:182] Loaded profile config "running-upgrade-607000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:35:37.450344    8654 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-607000"
	I0816 05:35:37.450341    8654 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-607000"
	I0816 05:35:37.450366    8654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-607000"
	I0816 05:35:37.450370    8654 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-607000"
	W0816 05:35:37.450386    8654 addons.go:243] addon storage-provisioner should already be in state true
	I0816 05:35:37.450395    8654 host.go:66] Checking if "running-upgrade-607000" exists ...
	I0816 05:35:37.451531    8654 kapi.go:59] client config for running-upgrade-607000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ee1610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 05:35:37.452416    8654 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-607000"
	W0816 05:35:37.452429    8654 addons.go:243] addon default-storageclass should already be in state true
	I0816 05:35:37.452442    8654 host.go:66] Checking if "running-upgrade-607000" exists ...
	I0816 05:35:37.454305    8654 out.go:177] * Verifying Kubernetes components...
	I0816 05:35:37.454731    8654 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 05:35:37.458571    8654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 05:35:37.458578    8654 sshutil.go:53] new ssh client: &{IP:localhost Port:51141 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/running-upgrade-607000/id_rsa Username:docker}
	I0816 05:35:37.462332    8654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:35:37.466278    8654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:35:37.470356    8654 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 05:35:37.470364    8654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 05:35:37.470371    8654 sshutil.go:53] new ssh client: &{IP:localhost Port:51141 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/running-upgrade-607000/id_rsa Username:docker}
	I0816 05:35:37.538942    8654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:35:37.543779    8654 api_server.go:52] waiting for apiserver process to appear ...
	I0816 05:35:37.543822    8654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:35:37.547805    8654 api_server.go:72] duration metric: took 97.523792ms to wait for apiserver process to appear ...
	I0816 05:35:37.547814    8654 api_server.go:88] waiting for apiserver healthz status ...
	I0816 05:35:37.547821    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:37.561387    8654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 05:35:37.607545    8654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 05:35:37.879636    8654 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 05:35:37.879647    8654 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 05:35:42.549843    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:42.549887    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:47.550069    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:47.550094    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:52.550308    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:52.550335    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:57.550641    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:57.550669    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:02.551087    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:02.551126    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:07.551759    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:07.551808    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0816 05:36:07.881556    8654 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0816 05:36:07.886278    8654 out.go:177] * Enabled addons: storage-provisioner
	I0816 05:36:07.896193    8654 addons.go:510] duration metric: took 30.446415333s for enable addons: enabled=[storage-provisioner]
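
The default-storageclass failure above happens because enabling that addon must first list StorageClasses through the apiserver, which is unreachable (dial tcp 10.0.2.15:8443: i/o timeout). A hedged client-go sketch of the call that fails; requires k8s.io/client-go and k8s.io/apimachinery in go.mod, and the function name is mine, not minikube's:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // listStorageClasses performs the StorageClass list that the addon
    // enable step needs; while the apiserver is down it returns the same
    // kind of dial/timeout error quoted above.
    func listStorageClasses(kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name)
        }
        return nil
    }

    func main() {
        fmt.Println(listStorageClasses("/var/lib/minikube/kubeconfig"))
    }
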
	I0816 05:36:12.552615    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:12.552645    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:17.553581    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:17.553596    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:22.554792    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:22.554816    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:27.556378    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:27.556412    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:32.557862    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:32.557886    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:37.560027    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:37.560141    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:37.576906    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:36:37.576981    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:37.588896    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:36:37.588977    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:37.599309    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:36:37.599385    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:37.611601    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:36:37.611668    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:37.621779    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:36:37.621840    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:37.638009    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:36:37.638083    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:37.652272    8654 logs.go:276] 0 containers: []
	W0816 05:36:37.652285    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:37.652337    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:37.662582    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:36:37.662597    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:37.662603    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:37.666894    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:37.666900    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:37.702279    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:36:37.702294    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:36:37.716534    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:36:37.716544    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:36:37.728292    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:36:37.728302    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:36:37.749815    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:36:37.749827    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:36:37.760763    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:37.760773    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:37.795752    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:36:37.795761    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:36:37.807187    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:36:37.807197    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:36:37.824244    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:36:37.824256    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:36:37.842539    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:37.842547    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:37.867604    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:36:37.867616    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:37.879208    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:36:37.879220    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:36:40.399780    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:45.402078    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:45.402248    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:45.414333    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:36:45.414410    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:45.425152    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:36:45.425230    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:45.435431    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:36:45.435496    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:45.446148    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:36:45.446218    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:45.456318    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:36:45.456394    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:45.466724    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:36:45.466794    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:45.476895    8654 logs.go:276] 0 containers: []
	W0816 05:36:45.476909    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:45.476976    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:45.487559    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:36:45.487577    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:36:45.487582    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:36:45.498917    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:36:45.498929    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:36:45.515153    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:36:45.515165    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:36:45.529907    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:36:45.529920    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:36:45.540771    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:45.540783    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:45.575507    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:45.575517    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:45.580354    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:36:45.580361    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:36:45.594599    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:36:45.594611    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:36:45.608290    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:45.608303    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:45.632175    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:36:45.632185    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:45.643584    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:45.643597    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:45.679813    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:36:45.679827    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:36:45.696043    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:36:45.696057    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:36:48.219276    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:53.221561    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:53.221745    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:53.238945    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:36:53.239043    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:53.259884    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:36:53.259957    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:53.270720    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:36:53.270794    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:53.281692    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:36:53.281759    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:53.291968    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:36:53.292034    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:53.302404    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:36:53.302469    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:53.312219    8654 logs.go:276] 0 containers: []
	W0816 05:36:53.312229    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:53.312282    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:53.322933    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:36:53.322947    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:53.322954    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:53.327967    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:36:53.327976    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:36:53.341881    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:36:53.341890    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:36:53.353466    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:36:53.353477    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:36:53.365139    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:36:53.365149    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:36:53.381783    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:36:53.381792    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:36:53.393101    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:53.393112    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:53.417784    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:36:53.417792    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:53.428729    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:53.428741    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:53.464317    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:53.464331    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:53.500664    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:36:53.500675    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:36:53.515425    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:36:53.515438    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:36:53.528513    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:36:53.528527    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:36:56.045831    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:01.048506    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:01.048719    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:01.066550    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:01.066644    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:01.080432    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:01.080515    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:01.092378    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:01.092450    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:01.102802    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:01.102869    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:01.113362    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:01.113436    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:01.124420    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:01.124499    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:01.134952    8654 logs.go:276] 0 containers: []
	W0816 05:37:01.134963    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:01.135026    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:01.145476    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:01.145490    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:01.145496    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:01.149891    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:01.149901    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:01.163233    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:01.163243    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:01.175716    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:01.175730    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:01.187863    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:01.187875    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:01.212233    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:01.212243    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:01.236647    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:01.236655    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:01.271379    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:01.271390    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:01.289628    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:01.289640    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:01.301528    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:01.301538    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:01.316020    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:01.316030    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:01.327917    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:01.327928    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:01.339401    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:01.339411    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
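
The `api_server.go:253`/`api_server.go:269` pairs above repeat the same pattern: a GET against `https://10.0.2.15:8443/healthz` that dies on the client timeout before any response headers arrive. A minimal sketch of such a probe, assuming a plain `net/http` client with a 5-second timeout and TLS verification skipped for the apiserver's self-signed certificate (minikube's actual transport may differ):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver /healthz endpoint.
// The 5-second client timeout reproduces the "Client.Timeout exceeded
// while awaiting headers" error seen above when the apiserver is down.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: skip verification of the
			// apiserver's self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
```

A healthy apiserver would return HTTP 200 and the probe would return nil; in this run every attempt surfaces the same timeout error instead.
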
	I0816 05:37:03.885683    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:08.888018    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:08.888366    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:08.925714    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:08.925854    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:08.948683    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:08.948778    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:08.963571    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:08.963641    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:08.975676    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:08.975754    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:08.991438    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:08.991516    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:09.005626    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:09.005695    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:09.015627    8654 logs.go:276] 0 containers: []
	W0816 05:37:09.015638    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:09.015699    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:09.027504    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:09.027521    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:09.027526    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:09.042105    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:09.042119    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:09.058731    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:09.058745    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:09.082908    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:09.082916    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:09.094806    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:09.094820    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:09.108661    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:09.108673    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:09.113162    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:09.113171    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:09.147518    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:09.147531    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:09.161744    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:09.161754    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:09.176208    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:09.176221    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:09.193143    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:09.193153    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:09.210573    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:09.210584    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:09.221712    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:09.221723    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:11.760095    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:16.762500    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:16.762833    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:16.795008    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:16.795140    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:16.818284    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:16.818375    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:16.831535    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:16.831613    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:16.843317    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:16.843386    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:16.854100    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:16.854174    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:16.865306    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:16.865378    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:16.875220    8654 logs.go:276] 0 containers: []
	W0816 05:37:16.875232    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:16.875290    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:16.887156    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:16.887169    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:16.887174    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:16.898964    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:16.898974    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:16.915965    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:16.915975    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:16.928350    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:16.928360    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:16.964001    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:16.964011    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:16.982659    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:16.982672    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:16.996126    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:16.996138    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:17.008288    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:17.008298    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:17.023717    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:17.023732    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:17.028405    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:17.028412    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:17.068482    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:17.068497    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:17.082697    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:17.082707    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:17.094393    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:17.094403    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
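
Between probes, each diagnostic pass enumerates one container per control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, which is why the same IDs (`7e7027a018f3`, `0f8987cebd88`, and so on) recur in every cycle while `kindnet` always comes back empty. A rough equivalent of that enumeration step, assuming `docker` is on `PATH` (function names here are illustrative, not minikube's own):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listK8sContainers returns the IDs of all containers (running or
// exited) whose name matches k8s_<component>, mirroring the
// docker ps filter used in the log above.
func listK8sContainers(component string) ([]string, error) {
	out, err := exec.Command(
		"docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}",
	).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := listK8sContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Matches the "N containers: [...]" lines above; an empty
		// slice corresponds to the `No container was found matching
		// "kindnet"` warning.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```
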
	I0816 05:37:19.626499    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:24.628677    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:24.628824    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:24.648030    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:24.648126    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:24.664125    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:24.664199    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:24.685230    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:24.685302    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:24.695534    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:24.695601    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:24.705800    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:24.705874    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:24.716313    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:24.716384    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:24.726623    8654 logs.go:276] 0 containers: []
	W0816 05:37:24.726635    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:24.726698    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:24.736921    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:24.736939    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:24.736944    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:24.749597    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:24.749608    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:24.766847    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:24.766857    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:24.778714    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:24.778724    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:24.816341    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:24.816351    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:24.821027    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:24.821036    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:24.835083    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:24.835092    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:24.851682    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:24.851694    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:24.863339    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:24.863350    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:24.888459    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:24.888465    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:24.938880    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:24.938892    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:24.951314    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:24.951323    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:24.968927    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:24.968936    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:27.484734    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:32.486860    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:32.486974    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:32.498031    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:32.498112    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:32.508823    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:32.508899    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:32.519953    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:32.520027    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:32.530455    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:32.530529    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:32.541378    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:32.541450    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:32.551953    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:32.552022    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:32.565485    8654 logs.go:276] 0 containers: []
	W0816 05:37:32.565496    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:32.565561    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:32.576440    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:32.576456    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:32.576463    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:32.612429    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:32.612438    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:32.626766    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:32.626777    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:32.641506    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:32.641516    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:32.655993    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:32.656003    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:32.673133    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:32.673143    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:32.685026    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:32.685036    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:32.696362    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:32.696372    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:32.701246    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:32.701254    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:32.736003    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:32.736015    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:32.748518    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:32.748528    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:32.760267    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:32.760277    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:32.771625    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:32.771636    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:35.297300    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:40.299720    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:40.300053    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:40.331933    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:40.332055    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:40.351200    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:40.351289    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:40.365698    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:40.365760    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:40.377478    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:40.377550    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:40.389459    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:40.389529    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:40.401096    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:40.401166    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:40.411339    8654 logs.go:276] 0 containers: []
	W0816 05:37:40.411353    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:40.411414    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:40.422050    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:40.422070    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:40.422077    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:40.459812    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:40.459829    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:40.464761    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:40.464768    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:40.501986    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:40.501998    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:40.518036    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:40.518050    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:40.533228    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:40.533240    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:40.550672    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:40.550684    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:40.575736    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:40.575746    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:40.590345    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:40.590360    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:40.604222    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:40.604232    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:40.616275    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:40.616285    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:40.631439    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:40.631449    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:40.642668    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:40.642677    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
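
Each pass then tails the last 400 lines from every container and from the `kubelet`, `docker`, and `cri-docker` units via `/bin/bash -c`; the timestamps show a full pass takes roughly two to three seconds, so together with the 5-second probe timeout a new healthz attempt starts about every eight seconds. A compact sketch of that gathering loop, with command strings copied from the log (the etcd container ID is included only as an example, and the error handling is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one shell command and returns its combined output,
// matching the /bin/bash -c invocations in the log above.
func gather(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	cmds := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
		"etcd":    "docker logs --tail 400 0f8987cebd88",
	}
	for name, cmd := range cmds {
		out, err := gather(cmd)
		if err != nil {
			fmt.Println("gathering", name, "failed:", err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", name, out)
	}
}
```
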
	I0816 05:37:43.156080    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:48.158388    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:48.158611    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:48.184467    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:48.184587    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:48.201627    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:48.201714    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:48.214785    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:48.214854    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:48.226541    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:48.226611    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:48.237362    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:48.237436    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:48.247803    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:48.247874    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:48.262131    8654 logs.go:276] 0 containers: []
	W0816 05:37:48.262142    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:48.262202    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:48.272466    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:48.272482    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:48.272487    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:48.283998    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:48.284008    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:48.309120    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:48.309135    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:48.347222    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:48.347240    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:48.356354    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:48.356368    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:48.437820    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:48.437834    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:48.452422    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:48.452434    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:48.464477    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:48.464491    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:48.476226    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:48.476236    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:48.490226    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:48.490239    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:48.505902    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:48.505913    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:48.517595    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:48.517605    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:48.533177    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:48.533187    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:51.051275    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:56.052696    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:56.052902    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:56.073304    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:56.073394    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:56.086399    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:56.086476    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:56.099064    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:56.099136    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:56.110401    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:56.110468    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:56.120811    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:56.120878    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:56.131739    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:56.131808    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:56.141930    8654 logs.go:276] 0 containers: []
	W0816 05:37:56.141942    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:56.142008    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:56.152152    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:56.152175    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:56.152181    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:56.156607    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:56.156613    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:56.168512    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:56.168524    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:56.183340    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:56.183350    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:56.209112    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:56.209126    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:56.223723    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:37:56.223736    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:37:56.235836    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:37:56.235848    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:37:56.247464    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:56.247477    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:56.259442    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:56.259453    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:56.270761    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:56.270771    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:56.306396    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:56.306406    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:56.317992    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:56.318004    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:56.335748    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:56.335762    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:56.347095    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:56.347109    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:56.384015    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:56.384024    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
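
From the 05:37:56 pass onward the coredns filter returns four containers rather than two: `d08c19c2b1cc` and `4f5615c53c6f` join `e87bc196aca8` and `fbb13a6d2faf`, consistent with the coredns pods being recreated while the apiserver stays unreachable (exited containers remain visible to `docker ps -a`). A small, purely illustrative sketch for spotting such additions by diffing successive ID sets:

```go
package main

import "fmt"

// newIDs returns the container IDs present in curr but not in prev,
// e.g. the coredns containers that first appear in the 05:37:56 pass.
func newIDs(prev, curr []string) []string {
	seen := make(map[string]bool, len(prev))
	for _, id := range prev {
		seen[id] = true
	}
	var added []string
	for _, id := range curr {
		if !seen[id] {
			added = append(added, id)
		}
	}
	return added
}

func main() {
	prev := []string{"e87bc196aca8", "fbb13a6d2faf"}
	curr := []string{"d08c19c2b1cc", "4f5615c53c6f", "e87bc196aca8", "fbb13a6d2faf"}
	fmt.Println(newIDs(prev, curr)) // [d08c19c2b1cc 4f5615c53c6f]
}
```
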
	I0816 05:37:58.904655    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:03.907022    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:03.907416    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:03.941727    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:03.941870    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:03.962111    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:03.962206    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:03.977413    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:03.977489    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:03.990396    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:03.990465    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:04.002646    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:04.002713    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:04.014218    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:04.014289    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:04.024682    8654 logs.go:276] 0 containers: []
	W0816 05:38:04.024696    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:04.024756    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:04.035807    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:04.035824    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:04.035833    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:04.040421    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:04.040428    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:04.063197    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:04.063206    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:04.097263    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:04.097275    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:04.108985    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:04.109000    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:04.120640    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:04.120650    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:04.132341    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:04.132354    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:04.145006    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:04.145017    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:04.188648    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:04.188662    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:04.206599    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:04.206613    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:04.221787    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:04.221797    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:04.236712    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:04.236725    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:04.258966    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:04.258977    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:04.270898    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:04.270909    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:04.286892    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:04.286904    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:06.804334    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:11.806728    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:11.806901    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:11.830373    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:11.830477    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:11.845651    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:11.845719    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:11.858302    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:11.858371    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:11.869426    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:11.869490    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:11.880396    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:11.880468    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:11.890678    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:11.890741    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:11.901061    8654 logs.go:276] 0 containers: []
	W0816 05:38:11.901074    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:11.901138    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:11.914354    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:11.914370    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:11.914376    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:11.919341    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:11.919347    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:11.931197    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:11.931208    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:11.943087    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:11.943100    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:11.961747    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:11.961761    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:11.973623    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:11.973634    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:12.011266    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:12.011281    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:12.047193    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:12.047203    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:12.059085    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:12.059094    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:12.077247    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:12.077256    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:12.091759    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:12.091773    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:12.105664    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:12.105677    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:12.117148    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:12.117158    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:12.142387    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:12.142396    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:12.154816    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:12.154830    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:14.671819    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:19.674056    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:19.674179    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:19.688219    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:19.688299    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:19.699768    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:19.699847    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:19.710129    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:19.710197    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:19.721017    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:19.721088    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:19.731451    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:19.731520    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:19.744794    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:19.744869    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:19.755311    8654 logs.go:276] 0 containers: []
	W0816 05:38:19.755321    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:19.755380    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:19.765834    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:19.765852    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:19.765856    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:19.784061    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:19.784074    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:19.795780    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:19.795791    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:19.820743    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:19.820754    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:19.832849    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:19.832859    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:19.866856    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:19.866865    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:19.878817    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:19.878829    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:19.890632    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:19.890642    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:19.902416    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:19.902428    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:19.939064    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:19.939079    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:19.953738    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:19.953748    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:19.965453    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:19.965464    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:19.969975    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:19.969984    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:19.986849    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:19.986861    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:20.004283    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:20.004294    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:22.518693    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:27.520821    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:27.520935    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:27.532296    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:27.532372    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:27.542822    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:27.542889    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:27.553830    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:27.553911    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:27.564282    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:27.564351    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:27.574960    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:27.575025    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:27.586150    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:27.586225    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:27.596303    8654 logs.go:276] 0 containers: []
	W0816 05:38:27.596314    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:27.596376    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:27.608504    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:27.608519    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:27.608525    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:27.644393    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:27.644409    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:27.659202    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:27.659222    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:27.672880    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:27.672895    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:27.689911    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:27.689925    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:27.702324    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:27.702335    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:27.714808    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:27.714819    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:27.732338    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:27.732349    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:27.744941    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:27.744959    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:27.749284    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:27.749290    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:27.785429    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:27.785440    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:27.797307    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:27.797316    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:27.821972    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:27.821983    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:27.837516    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:27.837524    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:27.848988    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:27.848998    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:30.362921    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:35.365307    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:35.365494    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:35.385831    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:35.385919    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:35.399739    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:35.399810    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:35.415865    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:35.415940    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:35.427115    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:35.427190    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:35.437858    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:35.437934    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:35.448903    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:35.448976    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:35.460064    8654 logs.go:276] 0 containers: []
	W0816 05:38:35.460075    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:35.460135    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:35.470363    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:35.470380    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:35.470385    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:35.496167    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:35.496179    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:35.508782    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:35.508792    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:35.545176    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:35.545190    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:35.563706    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:35.563720    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:35.577158    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:35.577171    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:35.591139    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:35.591152    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:35.602763    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:35.602776    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:35.615722    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:35.615733    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:35.627422    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:35.627435    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:35.664300    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:35.664309    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:35.668501    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:35.668508    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:35.683340    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:35.683352    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:35.695547    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:35.695561    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:35.710494    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:35.710509    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:38.225248    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:43.227870    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:43.228360    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:43.267090    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:43.267228    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:43.287479    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:43.287574    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:43.306565    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:43.306645    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:43.317630    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:43.317709    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:43.328634    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:43.328706    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:43.339082    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:43.339163    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:43.349802    8654 logs.go:276] 0 containers: []
	W0816 05:38:43.349813    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:43.349876    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:43.359918    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:43.359933    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:43.359939    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:43.371592    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:43.371601    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:43.411388    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:43.411398    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:43.426387    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:43.426398    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:43.445999    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:43.446011    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:43.464154    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:43.464168    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:43.487636    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:43.487648    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:43.499248    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:43.499259    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:43.516252    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:43.516266    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:43.531174    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:43.531184    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:43.553907    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:43.553917    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:43.591224    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:43.591234    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:43.603311    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:43.603321    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:43.607626    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:43.607635    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:43.619205    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:43.619218    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:46.133664    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:51.135352    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:51.135610    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:51.165158    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:51.165262    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:51.180726    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:51.180806    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:51.195733    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:51.195811    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:51.208147    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:51.208220    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:51.218344    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:51.218413    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:51.229920    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:51.229994    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:51.240273    8654 logs.go:276] 0 containers: []
	W0816 05:38:51.240284    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:51.240345    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:51.251290    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:51.251309    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:51.251315    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:51.255695    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:51.255705    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:51.291280    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:51.291294    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:51.305547    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:51.305558    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:51.316908    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:51.316919    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:51.352744    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:51.352753    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:51.367826    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:51.367841    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:51.379403    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:51.379413    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:51.390327    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:51.390340    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:51.404515    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:51.404526    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:51.416162    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:51.416173    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:51.428550    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:51.428563    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:51.440862    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:51.440872    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:51.460251    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:51.460265    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:51.478501    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:51.478511    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:54.004699    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:59.006818    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:59.006922    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:59.019074    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:59.019153    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:59.030582    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:59.030658    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:59.043939    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:59.044025    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:59.054962    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:59.055030    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:59.065556    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:59.065631    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:59.076488    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:59.076556    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:59.086865    8654 logs.go:276] 0 containers: []
	W0816 05:38:59.086877    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:59.086943    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:59.097555    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:59.097574    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:59.097580    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:59.109999    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:59.110011    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:59.126040    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:59.126051    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:59.145342    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:59.145353    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:59.175248    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:59.175263    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:59.188519    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:59.188530    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:59.201231    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:59.201243    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:59.214156    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:59.214167    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:59.226234    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:59.226245    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:59.265713    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:59.265735    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:59.271044    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:59.271061    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:59.286063    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:59.286075    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:59.301364    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:59.301379    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:59.314005    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:59.314018    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:59.327274    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:59.327285    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:39:01.868988    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:06.871186    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:06.871365    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:39:06.884114    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:39:06.884199    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:39:06.894964    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:39:06.895037    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:39:06.905937    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:39:06.906009    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:39:06.919201    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:39:06.919273    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:39:06.929786    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:39:06.929859    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:39:06.940768    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:39:06.940836    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:39:06.950783    8654 logs.go:276] 0 containers: []
	W0816 05:39:06.950798    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:39:06.950852    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:39:06.961624    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:39:06.961647    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:39:06.961653    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:39:06.973188    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:39:06.973202    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:39:07.008316    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:39:07.008327    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:39:07.022239    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:39:07.022250    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:39:07.033833    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:39:07.033844    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:39:07.050594    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:39:07.050607    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:39:07.075156    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:39:07.075166    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:39:07.111737    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:39:07.111746    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:39:07.123705    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:39:07.123715    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:39:07.135489    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:39:07.135501    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:39:07.149954    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:39:07.149969    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:39:07.161817    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:39:07.161828    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:39:07.174176    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:39:07.174187    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:39:07.178795    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:39:07.178804    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:39:07.193993    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:39:07.194003    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:39:09.708049    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:14.710249    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:14.710342    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:39:14.721814    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:39:14.721892    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:39:14.734176    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:39:14.734250    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:39:14.745865    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:39:14.745937    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:39:14.756263    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:39:14.756334    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:39:14.767335    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:39:14.767409    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:39:14.777852    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:39:14.777917    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:39:14.795589    8654 logs.go:276] 0 containers: []
	W0816 05:39:14.795603    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:39:14.795665    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:39:14.807533    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:39:14.807551    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:39:14.807556    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:39:14.824316    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:39:14.824327    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:39:14.836583    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:39:14.836593    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:39:14.875267    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:39:14.875279    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:39:14.887204    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:39:14.887216    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:39:14.899626    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:39:14.899636    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:39:14.917601    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:39:14.917611    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:39:14.933919    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:39:14.933928    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:39:14.948327    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:39:14.948338    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:39:14.960530    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:39:14.960541    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:39:14.974462    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:39:14.974476    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:39:14.985850    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:39:14.985861    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:39:14.997691    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:39:14.997701    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:39:15.022537    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:39:15.022547    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:39:15.058988    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:39:15.059001    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:39:17.565474    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:22.567627    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:22.567831    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:39:22.586809    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:39:22.586889    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:39:22.601562    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:39:22.601643    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:39:22.616661    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:39:22.616742    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:39:22.627735    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:39:22.627808    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:39:22.638205    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:39:22.638271    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:39:22.648526    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:39:22.648591    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:39:22.658701    8654 logs.go:276] 0 containers: []
	W0816 05:39:22.658716    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:39:22.658777    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:39:22.669375    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:39:22.669394    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:39:22.669400    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:39:22.683716    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:39:22.683726    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:39:22.697256    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:39:22.697267    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:39:22.732298    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:39:22.732310    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:39:22.743888    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:39:22.743901    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:39:22.755725    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:39:22.755737    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:39:22.773823    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:39:22.773834    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:39:22.789819    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:39:22.789830    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:39:22.804463    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:39:22.804474    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:39:22.816357    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:39:22.816367    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:39:22.840804    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:39:22.840814    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:39:22.878334    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:39:22.878344    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:39:22.882783    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:39:22.882789    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:39:22.894411    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:39:22.894421    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:39:22.906221    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:39:22.906231    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:39:25.425833    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:30.428374    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:30.428569    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:39:30.448292    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:39:30.448381    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:39:30.462191    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:39:30.462272    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:39:30.476088    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:39:30.476160    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:39:30.487003    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:39:30.487079    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:39:30.505229    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:39:30.505295    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:39:30.515572    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:39:30.515641    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:39:30.525999    8654 logs.go:276] 0 containers: []
	W0816 05:39:30.526014    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:39:30.526073    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:39:30.546808    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:39:30.546823    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:39:30.546828    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:39:30.558636    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:39:30.558648    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:39:30.576207    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:39:30.576217    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:39:30.600338    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:39:30.600347    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:39:30.605177    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:39:30.605186    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:39:30.619305    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:39:30.619318    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:39:30.642000    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:39:30.642009    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:39:30.656881    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:39:30.656892    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:39:30.691844    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:39:30.691854    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:39:30.705664    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:39:30.705675    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:39:30.723331    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:39:30.723342    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:39:30.735096    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:39:30.735105    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:39:30.746780    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:39:30.746792    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:39:30.784006    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:39:30.784018    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:39:30.795172    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:39:30.795185    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:39:33.309471    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:38.311616    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:38.316405    8654 out.go:201] 
	W0816 05:39:38.317983    8654 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0816 05:39:38.317988    8654 out.go:270] * 
	W0816 05:39:38.318413    8654 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:39:38.328334    8654 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-607000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-16 05:39:38.433628 -0700 PDT m=+1215.929406960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-607000 -n running-upgrade-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-607000 -n running-upgrade-607000: exit status 2 (15.712285083s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-607000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-403000          | force-systemd-flag-403000 | jenkins | v1.33.1 | 16 Aug 24 05:29 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-384000              | force-systemd-env-384000  | jenkins | v1.33.1 | 16 Aug 24 05:29 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-384000           | force-systemd-env-384000  | jenkins | v1.33.1 | 16 Aug 24 05:29 PDT | 16 Aug 24 05:29 PDT |
	| start   | -p docker-flags-193000                | docker-flags-193000       | jenkins | v1.33.1 | 16 Aug 24 05:29 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-403000             | force-systemd-flag-403000 | jenkins | v1.33.1 | 16 Aug 24 05:29 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-403000          | force-systemd-flag-403000 | jenkins | v1.33.1 | 16 Aug 24 05:29 PDT | 16 Aug 24 05:29 PDT |
	| start   | -p cert-expiration-169000             | cert-expiration-169000    | jenkins | v1.33.1 | 16 Aug 24 05:29 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-193000 ssh               | docker-flags-193000       | jenkins | v1.33.1 | 16 Aug 24 05:29 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-193000 ssh               | docker-flags-193000       | jenkins | v1.33.1 | 16 Aug 24 05:29 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-193000                | docker-flags-193000       | jenkins | v1.33.1 | 16 Aug 24 05:30 PDT | 16 Aug 24 05:30 PDT |
	| start   | -p cert-options-804000                | cert-options-804000       | jenkins | v1.33.1 | 16 Aug 24 05:30 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-804000 ssh               | cert-options-804000       | jenkins | v1.33.1 | 16 Aug 24 05:30 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-804000 -- sudo        | cert-options-804000       | jenkins | v1.33.1 | 16 Aug 24 05:30 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-804000                | cert-options-804000       | jenkins | v1.33.1 | 16 Aug 24 05:30 PDT | 16 Aug 24 05:30 PDT |
	| start   | -p running-upgrade-607000             | minikube                  | jenkins | v1.26.0 | 16 Aug 24 05:30 PDT | 16 Aug 24 05:31 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-607000             | running-upgrade-607000    | jenkins | v1.33.1 | 16 Aug 24 05:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-169000             | cert-expiration-169000    | jenkins | v1.33.1 | 16 Aug 24 05:33 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-169000             | cert-expiration-169000    | jenkins | v1.33.1 | 16 Aug 24 05:33 PDT | 16 Aug 24 05:33 PDT |
	| start   | -p kubernetes-upgrade-604000          | kubernetes-upgrade-604000 | jenkins | v1.33.1 | 16 Aug 24 05:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-604000          | kubernetes-upgrade-604000 | jenkins | v1.33.1 | 16 Aug 24 05:33 PDT | 16 Aug 24 05:33 PDT |
	| start   | -p kubernetes-upgrade-604000          | kubernetes-upgrade-604000 | jenkins | v1.33.1 | 16 Aug 24 05:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-604000          | kubernetes-upgrade-604000 | jenkins | v1.33.1 | 16 Aug 24 05:33 PDT | 16 Aug 24 05:33 PDT |
	| start   | -p stopped-upgrade-972000             | minikube                  | jenkins | v1.26.0 | 16 Aug 24 05:33 PDT | 16 Aug 24 05:34 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-972000 stop           | minikube                  | jenkins | v1.26.0 | 16 Aug 24 05:34 PDT | 16 Aug 24 05:34 PDT |
	| start   | -p stopped-upgrade-972000             | stopped-upgrade-972000    | jenkins | v1.33.1 | 16 Aug 24 05:34 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 05:34:23
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 05:34:23.166240    8876 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:34:23.166411    8876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:34:23.166415    8876 out.go:358] Setting ErrFile to fd 2...
	I0816 05:34:23.166418    8876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:34:23.166557    8876 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:34:23.167778    8876 out.go:352] Setting JSON to false
	I0816 05:34:23.187568    8876 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5632,"bootTime":1723806031,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:34:23.187637    8876 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:34:23.192794    8876 out.go:177] * [stopped-upgrade-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:34:23.199787    8876 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:34:23.199861    8876 notify.go:220] Checking for updates...
	I0816 05:34:23.207747    8876 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:34:23.210776    8876 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:34:23.213830    8876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:34:23.216727    8876 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:34:23.219725    8876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:34:23.223145    8876 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:34:23.224753    8876 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 05:34:23.227765    8876 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:34:23.230758    8876 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:34:23.238917    8876 start.go:297] selected driver: qemu2
	I0816 05:34:23.238924    8876 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 05:34:23.238974    8876 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:34:23.241509    8876 cni.go:84] Creating CNI manager for ""
	I0816 05:34:23.241526    8876 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:34:23.241561    8876 start.go:340] cluster config:
	{Name:stopped-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 05:34:23.241622    8876 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:34:23.250739    8876 out.go:177] * Starting "stopped-upgrade-972000" primary control-plane node in "stopped-upgrade-972000" cluster
	I0816 05:34:23.254751    8876 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 05:34:23.254769    8876 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0816 05:34:23.254774    8876 cache.go:56] Caching tarball of preloaded images
	I0816 05:34:23.254831    8876 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:34:23.254836    8876 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0816 05:34:23.254890    8876 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/config.json ...
	I0816 05:34:23.255353    8876 start.go:360] acquireMachinesLock for stopped-upgrade-972000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:34:23.255392    8876 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "stopped-upgrade-972000"
	I0816 05:34:23.255402    8876 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:34:23.255407    8876 fix.go:54] fixHost starting: 
	I0816 05:34:23.255524    8876 fix.go:112] recreateIfNeeded on stopped-upgrade-972000: state=Stopped err=<nil>
	W0816 05:34:23.255533    8876 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:34:23.262749    8876 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-972000" ...
	I0816 05:34:20.843393    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:23.266761    8876 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:34:23.266825    8876 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51362-:22,hostfwd=tcp::51363-:2376,hostname=stopped-upgrade-972000 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/disk.qcow2
	I0816 05:34:23.313759    8876 main.go:141] libmachine: STDOUT: 
	I0816 05:34:23.313791    8876 main.go:141] libmachine: STDERR: 
	I0816 05:34:23.313796    8876 main.go:141] libmachine: Waiting for VM to start (ssh -p 51362 docker@127.0.0.1)...
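
The restart above is a plain exec of qemu-system-aarch64: hvf hardware acceleration, 2200 MiB of RAM, 2 vCPUs, and a user-mode NIC that forwards host ports to the guest's SSH (22) and Docker API (2376) ports. A minimal Go sketch of the same invocation pattern follows; the machine directory and ports are hypothetical placeholders, not minikube's actual helper code.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// startQemuVM mirrors the qemu-system-aarch64 invocation in the log above.
// machineDir, sshPort, and dockerPort are placeholders for illustration.
func startQemuVM(machineDir string, sshPort, dockerPort int) error {
	args := []string{
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf",
		"-m", "2200",
		"-smp", "2",
		"-boot", "d",
		"-cdrom", machineDir + "/boot2docker.iso",
		"-qmp", fmt.Sprintf("unix:%s/monitor,server,nowait", machineDir),
		"-pidfile", machineDir + "/qemu.pid",
		"-nic", fmt.Sprintf("user,model=virtio,hostfwd=tcp::%d-:22,hostfwd=tcp::%d-:2376", sshPort, dockerPort),
		"-daemonize", // qemu forks to the background, so Run returns once boot starts
		machineDir + "/disk.qcow2",
	}
	return exec.Command("qemu-system-aarch64", args...).Run()
}

func main() {
	if err := startQemuVM("/tmp/machines/demo", 51362, 51363); err != nil {
		log.Fatal(err)
	}
}

Because of -daemonize, the exec returns as soon as the VM process is up, which is why the log then blocks on "Waiting for VM to start" over SSH rather than on the qemu process itself.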
	I0816 05:34:25.846297    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:25.846927    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:25.889888    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:25.890027    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:25.911384    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:25.911499    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:25.926780    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:25.926859    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:25.938911    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:25.938983    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:25.959005    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:25.959083    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:25.978874    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:25.978956    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:25.994223    8654 logs.go:276] 0 containers: []
	W0816 05:34:25.994238    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:25.994297    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:26.009841    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:26.009861    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:26.009867    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:26.051998    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:26.052011    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:26.070403    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:26.070414    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:26.081974    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:26.081988    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:26.086508    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:26.086518    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:26.102979    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:26.102991    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:26.115086    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:26.115096    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:26.139631    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:26.139642    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:26.183534    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:26.183550    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:26.194501    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:26.194521    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:26.206121    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:26.206132    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:26.227375    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:26.227387    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:26.244096    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:26.244108    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:26.262316    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:26.262328    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:26.273416    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:26.273430    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:26.287873    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:26.287883    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:26.299168    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:26.299181    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:28.815702    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:33.816180    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
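
Each "stopped:" line above is one failed probe of the apiserver's healthz endpoint; the apiserver inside the restarted VM is not answering yet, so every timeout triggers another round of container log gathering. A minimal sketch of such a probe loop, assuming a 5-second client timeout and skipped TLS verification (the test apiserver's certificate is self-signed); neither value is necessarily minikube's exact setting:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues a single probe against the apiserver healthz endpoint.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // an unreachable VM surfaces as "context deadline exceeded", as above
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	for i := 0; i < 10; i++ {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}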
	I0816 05:34:33.816374    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:33.828720    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:33.828799    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:33.841046    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:33.841119    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:33.853287    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:33.853357    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:33.872270    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:33.872364    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:33.890144    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:33.890218    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:33.903461    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:33.903531    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:33.915837    8654 logs.go:276] 0 containers: []
	W0816 05:34:33.915849    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:33.915908    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:33.929862    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:33.929881    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:33.929889    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:33.942544    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:33.942558    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:33.980886    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:33.980896    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:33.993046    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:33.993059    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:34.008290    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:34.008301    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:34.025617    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:34.025627    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:34.039233    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:34.039244    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:34.043746    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:34.043753    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:34.057888    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:34.057900    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:34.071590    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:34.071604    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:34.085184    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:34.085195    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:34.097707    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:34.097717    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:34.109698    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:34.109711    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:34.121647    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:34.121656    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:34.145607    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:34.145617    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:34.189270    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:34.189293    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:34.205936    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:34.205957    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:36.720545    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:43.157488    8876 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/config.json ...
	I0816 05:34:43.157764    8876 machine.go:93] provisionDockerMachine start ...
	I0816 05:34:43.157880    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.158064    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.158070    8876 main.go:141] libmachine: About to run SSH command:
	hostname
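
Provisioning runs each of these commands over the forwarded SSH port (51362 here), one session per command. A minimal sketch of that pattern using golang.org/x/crypto/ssh, with a placeholder key path; the host key check is skipped because the guest is a throwaway test VM:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; the log uses the machine's generated id_rsa.
	key, err := os.ReadFile("/tmp/machines/demo/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	}
	client, err := ssh.Dial("tcp", "localhost:51362", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname") // one command per session
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out) // the pre-rename hostname, "minikube" in the output below
}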
	I0816 05:34:41.723152    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:41.723568    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:41.762694    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:41.762840    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:41.784299    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:41.784407    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:41.799029    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:41.799105    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:41.811687    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:41.811766    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:41.822404    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:41.822465    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:41.833543    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:41.833606    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:41.844547    8654 logs.go:276] 0 containers: []
	W0816 05:34:41.844559    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:41.844622    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:41.855664    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:41.855680    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:41.855686    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:41.867255    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:41.867268    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:41.886434    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:41.886446    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:41.903125    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:41.903137    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:41.914343    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:41.914352    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:41.938443    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:41.938453    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:41.949902    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:41.949918    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:41.992115    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:41.992125    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:41.996332    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:41.996338    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:42.030722    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:42.030732    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:42.042628    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:42.042642    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:42.057353    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:42.057366    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:42.075362    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:42.075392    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:42.091149    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:42.091160    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:42.104847    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:42.104864    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:42.118884    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:42.118898    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:42.133606    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:42.133619    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:43.221670    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:34:43.221688    8876 buildroot.go:166] provisioning hostname "stopped-upgrade-972000"
	I0816 05:34:43.221752    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.221892    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.221900    8876 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-972000 && echo "stopped-upgrade-972000" | sudo tee /etc/hostname
	I0816 05:34:43.288657    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-972000
	
	I0816 05:34:43.288718    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.288870    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.288883    8876 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-972000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-972000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-972000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:34:43.353581    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 05:34:43.353592    8876 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-6249/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-6249/.minikube}
	I0816 05:34:43.353606    8876 buildroot.go:174] setting up certificates
	I0816 05:34:43.353611    8876 provision.go:84] configureAuth start
	I0816 05:34:43.353621    8876 provision.go:143] copyHostCerts
	I0816 05:34:43.353698    8876 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.pem, removing ...
	I0816 05:34:43.353705    8876 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.pem
	I0816 05:34:43.353942    8876 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.pem (1082 bytes)
	I0816 05:34:43.354151    8876 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-6249/.minikube/cert.pem, removing ...
	I0816 05:34:43.354155    8876 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-6249/.minikube/cert.pem
	I0816 05:34:43.354218    8876 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-6249/.minikube/cert.pem (1123 bytes)
	I0816 05:34:43.354347    8876 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-6249/.minikube/key.pem, removing ...
	I0816 05:34:43.354351    8876 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-6249/.minikube/key.pem
	I0816 05:34:43.354406    8876 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-6249/.minikube/key.pem (1679 bytes)
	I0816 05:34:43.354504    8876 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-972000 san=[127.0.0.1 localhost minikube stopped-upgrade-972000]
	I0816 05:34:43.450834    8876 provision.go:177] copyRemoteCerts
	I0816 05:34:43.450866    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:34:43.450875    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	I0816 05:34:43.485452    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:34:43.492245    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0816 05:34:43.498956    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 05:34:43.505983    8876 provision.go:87] duration metric: took 152.363208ms to configureAuth
	I0816 05:34:43.505995    8876 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:34:43.506108    8876 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:34:43.506143    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.506228    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.506235    8876 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:34:43.566747    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:34:43.566757    8876 buildroot.go:70] root file system type: tmpfs
	I0816 05:34:43.566809    8876 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:34:43.566857    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.566967    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.567003    8876 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:34:43.633449    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:34:43.633500    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.633609    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.633620    8876 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:34:43.997513    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:34:43.997525    8876 machine.go:96] duration metric: took 839.768208ms to provisionDockerMachine
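
The diff-or-swap one-liner above only replaces the unit file and restarts Docker when the generated content actually differs from what is installed, avoiding a needless daemon restart on repeated provisioning. A sketch of the same idea in Go, with a placeholder path rather than the real unit location:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// swapIfChanged writes content to path only when it differs from what is
// already there, reporting whether a daemon-reload/restart is warranted.
func swapIfChanged(path string, content []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // unchanged: skip the disruptive restart entirely
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, content, 0o644); err != nil {
		return false, err
	}
	// Rename is atomic on the same filesystem, like the `sudo mv` in the log.
	return true, os.Rename(tmp, path)
}

func main() {
	unit := []byte("[Unit]\nDescription=demo\n")
	changed, err := swapIfChanged("/tmp/docker.service", unit)
	if err != nil {
		panic(err)
	}
	fmt.Println("restart needed:", changed) // real flow: daemon-reload + restart docker
}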
	I0816 05:34:43.997536    8876 start.go:293] postStartSetup for "stopped-upgrade-972000" (driver="qemu2")
	I0816 05:34:43.997542    8876 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:34:43.997614    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:34:43.997624    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	I0816 05:34:44.029522    8876 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:34:44.030859    8876 info.go:137] Remote host: Buildroot 2021.02.12
	I0816 05:34:44.030866    8876 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-6249/.minikube/addons for local assets ...
	I0816 05:34:44.030951    8876 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-6249/.minikube/files for local assets ...
	I0816 05:34:44.031069    8876 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0816 05:34:44.031196    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:34:44.034051    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0816 05:34:44.041303    8876 start.go:296] duration metric: took 43.761333ms for postStartSetup
	I0816 05:34:44.041318    8876 fix.go:56] duration metric: took 20.786254541s for fixHost
	I0816 05:34:44.041353    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:44.041455    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:44.041460    8876 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:34:44.101151    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723811683.930611296
	
	I0816 05:34:44.101160    8876 fix.go:216] guest clock: 1723811683.930611296
	I0816 05:34:44.101164    8876 fix.go:229] Guest: 2024-08-16 05:34:43.930611296 -0700 PDT Remote: 2024-08-16 05:34:44.041319 -0700 PDT m=+20.904770793 (delta=-110.707704ms)
	I0816 05:34:44.101175    8876 fix.go:200] guest clock delta is within tolerance: -110.707704ms
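
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the roughly -110ms drift as within tolerance. A sketch of that delta check, assuming a 1-second tolerance (the real threshold may differ):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns the
// guest-minus-host drift.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	delta, err := guestClockDelta("1723811683.930611296\n", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance, for illustration only
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}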
	I0816 05:34:44.101182    8876 start.go:83] releasing machines lock for "stopped-upgrade-972000", held for 20.846125166s
	I0816 05:34:44.101251    8876 ssh_runner.go:195] Run: cat /version.json
	I0816 05:34:44.101262    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	I0816 05:34:44.101251    8876 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:34:44.101305    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	W0816 05:34:44.102196    8876 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51483->127.0.0.1:51362: write: broken pipe
	I0816 05:34:44.102213    8876 retry.go:31] will retry after 368.049268ms: ssh: handshake failed: write tcp 127.0.0.1:51483->127.0.0.1:51362: write: broken pipe
	W0816 05:34:44.131802    8876 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0816 05:34:44.131863    8876 ssh_runner.go:195] Run: systemctl --version
	I0816 05:34:44.133712    8876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 05:34:44.135160    8876 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:34:44.135190    8876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0816 05:34:44.138031    8876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0816 05:34:44.142923    8876 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:34:44.142938    8876 start.go:495] detecting cgroup driver to use...
	I0816 05:34:44.143031    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:34:44.150141    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0816 05:34:44.153730    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:34:44.157019    8876 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:34:44.157051    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:34:44.159953    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:34:44.162988    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:34:44.166074    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:34:44.169086    8876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:34:44.171780    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:34:44.174735    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:34:44.178142    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:34:44.181563    8876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:34:44.184530    8876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:34:44.187176    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:44.267858    8876 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 05:34:44.278443    8876 start.go:495] detecting cgroup driver to use...
	I0816 05:34:44.278502    8876 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:34:44.283686    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:34:44.289144    8876 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:34:44.297432    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:34:44.301980    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:34:44.306708    8876 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:34:44.362389    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:34:44.367636    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:34:44.372764    8876 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:34:44.374104    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:34:44.376915    8876 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0816 05:34:44.382139    8876 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:34:44.470516    8876 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:34:44.553539    8876 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:34:44.553597    8876 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:34:44.559010    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:44.644546    8876 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:34:45.799713    8876 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155169416s)
	I0816 05:34:45.799786    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 05:34:45.805905    8876 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 05:34:45.812401    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:34:45.817868    8876 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 05:34:45.898418    8876 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 05:34:45.975826    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:46.058824    8876 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 05:34:46.065429    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:34:46.070290    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:46.170840    8876 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 05:34:46.211562    8876 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 05:34:46.211657    8876 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 05:34:46.214657    8876 start.go:563] Will wait 60s for crictl version
	I0816 05:34:46.214718    8876 ssh_runner.go:195] Run: which crictl
	I0816 05:34:46.216194    8876 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 05:34:46.231421    8876 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0816 05:34:46.231494    8876 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:34:46.248322    8876 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:34:46.269837    8876 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0816 05:34:46.269905    8876 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0816 05:34:46.271313    8876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:34:46.274964    8876 kubeadm.go:883] updating cluster {Name:stopped-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0816 05:34:46.275008    8876 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 05:34:46.275048    8876 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:34:46.285343    8876 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 05:34:46.285352    8876 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 05:34:46.285400    8876 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 05:34:46.288467    8876 ssh_runner.go:195] Run: which lz4
	I0816 05:34:46.289711    8876 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 05:34:46.291014    8876 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 05:34:46.291023    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0816 05:34:47.203882    8876 docker.go:649] duration metric: took 914.225583ms to copy over tarball
	I0816 05:34:47.203948    8876 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 05:34:44.645131    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:48.385069    8876 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.181113667s)
	I0816 05:34:48.385087    8876 ssh_runner.go:146] rm: /preloaded.tar.lz4
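
The preload step above copies the lz4-compressed image tarball into the VM, unpacks it over /var with extended attributes preserved (so file capabilities survive), and removes the tarball. A minimal Go sketch of the extraction command, using the same tar flags as the log:

package main

import (
	"log"
	"os/exec"
)

// extractPreload unpacks the preloaded-images tarball into /var, delegating
// decompression to the external lz4 binary via tar's -I flag.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", "/var", "-xf", tarball)
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}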
	I0816 05:34:48.400586    8876 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 05:34:48.403459    8876 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0816 05:34:48.408431    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:48.491824    8876 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:34:50.251911    8876 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.760099875s)
	I0816 05:34:50.252007    8876 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:34:50.268906    8876 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 05:34:50.268917    8876 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 05:34:50.268923    8876 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 05:34:50.272910    8876 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:34:50.274819    8876 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:34:50.276529    8876 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:34:50.276701    8876 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:34:50.278311    8876 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:34:50.278477    8876 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:34:50.279791    8876 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:34:50.280220    8876 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:34:50.281037    8876 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:34:50.281425    8876 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:34:50.282398    8876 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:34:50.282469    8876 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:34:50.283193    8876 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:34:50.283542    8876 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0816 05:34:50.284209    8876 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:34:50.284729    8876 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0816 05:34:50.752021    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:34:50.752666    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:34:50.759901    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:34:50.768507    8876 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0816 05:34:50.768546    8876 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:34:50.768609    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:34:50.768623    8876 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0816 05:34:50.768639    8876 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:34:50.768664    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:34:50.778017    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:34:50.784866    8876 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0816 05:34:50.784878    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0816 05:34:50.784884    8876 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:34:50.784932    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:34:50.796347    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0816 05:34:50.796365    8876 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0816 05:34:50.796386    8876 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:34:50.796436    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:34:50.798224    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0816 05:34:50.801240    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0816 05:34:50.802354    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0816 05:34:50.810330    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0816 05:34:50.811332    8876 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0816 05:34:50.811445    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:34:50.816012    8876 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0816 05:34:50.816034    8876 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:34:50.816083    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0816 05:34:50.819456    8876 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0816 05:34:50.819475    8876 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0816 05:34:50.819524    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0816 05:34:50.834786    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0816 05:34:50.834943    8876 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0816 05:34:50.834966    8876 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:34:50.835012    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:34:50.843902    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0816 05:34:50.844018    8876 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0816 05:34:50.851475    8876 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0816 05:34:50.851504    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0816 05:34:50.851601    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0816 05:34:50.851691    8876 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0816 05:34:50.853200    8876 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0816 05:34:50.853213    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0816 05:34:50.875243    8876 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0816 05:34:50.875257    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0816 05:34:50.916855    8876 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0816 05:34:50.916875    8876 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0816 05:34:50.916893    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0816 05:34:50.953311    8876 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0816 05:34:51.009629    8876 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0816 05:34:51.009755    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:34:51.022302    8876 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0816 05:34:51.022326    8876 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:34:51.022380    8876 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:34:51.037711    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 05:34:51.037836    8876 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 05:34:51.039353    8876 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0816 05:34:51.039365    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0816 05:34:51.071182    8876 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 05:34:51.071203    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0816 05:34:51.311146    8876 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 05:34:51.311189    8876 cache_images.go:92] duration metric: took 1.04227675s to LoadCachedImages
	W0816 05:34:51.311236    8876 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
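
The block above is minikube's cached-image load path: for each required image it inspects the image ID inside the guest ("docker image inspect --format {{.Id}}"), treats a missing or mismatched hash as "needs transfer", removes the stale tag, copies the cached tarball over SSH, and pipes it into "docker load". The X line shows why the phase still fails overall: the arm64 kube-apiserver tarball was never populated in the host-side cache, so LoadCachedImages aborts even though pause, coredns and storage-provisioner loaded. A minimal sketch of the inspect/rmi/load pipeline, run against a local Docker daemon rather than over SSH (image name, expected ID and tarball path are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// loadIfMissing mirrors the flow in the log: inspect the image ID and,
// if it does not match the expected hash, drop the tag and re-load the
// image from a cached tarball.
func loadIfMissing(image, wantID, tarball string) error {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present at the expected hash
	}
	_ = exec.Command("docker", "rmi", image).Run() // ignore "no such image"
	f, err := os.Open(tarball) // equivalent of: cat <tarball> | docker load
	if err != nil {
		return err
	}
	defer f.Close()
	load := exec.Command("docker", "load")
	load.Stdin = f
	return load.Run()
}

func main() {
	fmt.Println(loadIfMissing("registry.k8s.io/pause:3.7",
		"sha256:<expected-id>", "/var/lib/minikube/images/pause_3.7"))
}
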
	I0816 05:34:51.311244    8876 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0816 05:34:51.311296    8876 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-972000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 05:34:51.311356    8876 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 05:34:51.325126    8876 cni.go:84] Creating CNI manager for ""
	I0816 05:34:51.325137    8876 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:34:51.325144    8876 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 05:34:51.325154    8876 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-972000 NodeName:stopped-upgrade-972000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 05:34:51.325235    8876 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-972000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
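The rendered file is four YAML documents in one: InitConfiguration (node registration and advertise address), ClusterConfiguration (control-plane endpoint, SANs and component extra args), KubeletConfiguration and KubeProxyConfiguration. minikube builds it from a template plus the cluster parameters listed above; a toy sketch of that render step follows, with a cut-down template that is illustrative rather than minikube's actual one:

package main

import (
	"os"
	"text/template"
)

// An abbreviated stand-in for the kubeadm config template; only the
// InitConfiguration document is shown.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.Name}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, map[string]any{
		"NodeIP":    "10.0.2.15",
		"Port":      8443,
		"CRISocket": "unix:///var/run/cri-dockerd.sock",
		"Name":      "stopped-upgrade-972000",
	}); err != nil {
		panic(err)
	}
}
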
	I0816 05:34:51.325285    8876 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0816 05:34:51.328176    8876 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 05:34:51.328206    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 05:34:51.330735    8876 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0816 05:34:51.335544    8876 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 05:34:51.340679    8876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0816 05:34:51.346176    8876 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0816 05:34:51.347478    8876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:34:51.350946    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:51.428081    8876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:34:51.433495    8876 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000 for IP: 10.0.2.15
	I0816 05:34:51.433502    8876 certs.go:194] generating shared ca certs ...
	I0816 05:34:51.433510    8876 certs.go:226] acquiring lock for ca certs: {Name:mk6cf8af742115923453a119a0b968ea241ec803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:34:51.433677    8876 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.key
	I0816 05:34:51.433728    8876 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/proxy-client-ca.key
	I0816 05:34:51.433736    8876 certs.go:256] generating profile certs ...
	I0816 05:34:51.433809    8876 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.key
	I0816 05:34:51.433826    8876 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key.1ac75644
	I0816 05:34:51.433839    8876 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt.1ac75644 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0816 05:34:51.488062    8876 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt.1ac75644 ...
	I0816 05:34:51.488074    8876 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt.1ac75644: {Name:mkaad8b00746cefd9f64ceee91316d9444dd95e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:34:51.488705    8876 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key.1ac75644 ...
	I0816 05:34:51.488712    8876 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key.1ac75644: {Name:mk3df119846dcced9aba850eb0346c334139cbfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:34:51.488882    8876 certs.go:381] copying /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt.1ac75644 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt
	I0816 05:34:51.489022    8876 certs.go:385] copying /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key.1ac75644 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key
	I0816 05:34:51.489178    8876 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/proxy-client.key
	I0816 05:34:51.489307    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/6746.pem (1338 bytes)
	W0816 05:34:51.489337    8876 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0816 05:34:51.489346    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 05:34:51.489365    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem (1082 bytes)
	I0816 05:34:51.489383    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem (1123 bytes)
	I0816 05:34:51.489403    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/key.pem (1679 bytes)
	I0816 05:34:51.489448    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
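
The profile cert step above regenerates the apiserver serving certificate with four IP SANs: the in-cluster service VIP (10.96.0.1), loopback, 10.0.0.1 and the node address 10.0.2.15; a client reaching the apiserver on any of these must find that IP in the certificate or TLS verification fails. A minimal sketch of issuing such a CA-signed serving cert with crypto/x509 (key size, lifetimes and subjects are illustrative, and minikube signs with its persistent minikubeCA key rather than a throwaway one):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for the persistent minikubeCA key pair.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Serving certificate carrying the four IP SANs from the log.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
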
	I0816 05:34:51.489780    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 05:34:51.496600    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 05:34:51.502904    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 05:34:51.510032    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 05:34:51.516895    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 05:34:51.523752    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 05:34:51.530762    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 05:34:51.538022    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 05:34:51.545487    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 05:34:51.552558    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0816 05:34:51.559218    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0816 05:34:51.566492    8876 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 05:34:51.571720    8876 ssh_runner.go:195] Run: openssl version
	I0816 05:34:51.573546    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 05:34:51.576719    8876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:34:51.578222    8876 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:30 /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:34:51.578243    8876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:34:51.580168    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 05:34:51.583074    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0816 05:34:51.586574    8876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0816 05:34:51.587986    8876 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:20 /usr/share/ca-certificates/6746.pem
	I0816 05:34:51.588005    8876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0816 05:34:51.589814    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0816 05:34:51.592960    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0816 05:34:51.595840    8876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0816 05:34:51.597266    8876 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:20 /usr/share/ca-certificates/67462.pem
	I0816 05:34:51.597286    8876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0816 05:34:51.598985    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
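
The openssl x509 -hash calls and the <hash>.0 symlinks above implement OpenSSL's hashed CA-directory convention: each trusted certificate in /etc/ssl/certs is reachable through a symlink named after its subject-name hash (b5213941.0 for minikubeCA.pem), which is how verification lookups find it. A sketch of one such link step, shelling out to openssl the way the log does (paths taken from the log, run inside the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// Equivalent of: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
	fmt.Println(link)
}
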
	I0816 05:34:51.602377    8876 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 05:34:51.603797    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 05:34:51.606051    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 05:34:51.608155    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 05:34:51.610132    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 05:34:51.612077    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 05:34:51.613793    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
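
The -checkend 86400 runs above make openssl exit non-zero if a certificate expires within the next 24 hours, which is the signal to regenerate it before restarting the control plane. The same test expressed in Go against NotAfter (certificate path taken from the log; this would run inside the guest, not on the host):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// openssl x509 -checkend 86400 asks: is the cert still valid in 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate")
	} else {
		fmt.Println("certificate ok")
	}
}
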
	I0816 05:34:51.615676    8876 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 05:34:51.615745    8876 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 05:34:51.626359    8876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 05:34:51.629899    8876 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 05:34:51.629905    8876 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 05:34:51.629929    8876 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 05:34:51.632699    8876 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:34:51.632978    8876 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-972000" does not appear in /Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:34:51.633068    8876 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-6249/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-972000" cluster setting kubeconfig missing "stopped-upgrade-972000" context setting]
	I0816 05:34:51.633280    8876 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/kubeconfig: {Name:mka7b2a1dac03f0ea4ac28563b4fe884a2b1b206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:34:51.633717    8876 kapi.go:59] client config for stopped-upgrade-972000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101e55610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 05:34:51.634049    8876 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 05:34:51.636608    8876 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-972000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
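
Drift detection here is simply diff -u between the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new: GNU diff exits 0 when the files match and 1 when they differ, and minikube maps exit status 1 to "reconfigure cluster". In this run the drift is real, since the upgrade moved the CRI socket to a unix:// URL and the cgroup driver from systemd to cgroupfs. A sketch of the exit-code check, run locally rather than over SSH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Printf("config drift detected, reconfiguring:\n%s", out)
	} else if err != nil {
		panic(err) // exit status 2 means diff itself failed (e.g. missing file)
	} else {
		fmt.Println("config unchanged")
	}
}
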
	I0816 05:34:51.636614    8876 kubeadm.go:1160] stopping kube-system containers ...
	I0816 05:34:51.636656    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 05:34:51.647351    8876 docker.go:483] Stopping containers: [d49ec1605243 02153e39f839 a54c050fa5fd d464a7742a93 753544007c33 fdf37f08503a a3b3052a7b8a e3381be358f6]
	I0816 05:34:51.647424    8876 ssh_runner.go:195] Run: docker stop d49ec1605243 02153e39f839 a54c050fa5fd d464a7742a93 753544007c33 fdf37f08503a a3b3052a7b8a e3381be358f6
	I0816 05:34:51.658407    8876 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 05:34:51.664163    8876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 05:34:51.666846    8876 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 05:34:51.666853    8876 kubeadm.go:157] found existing configuration files:
	
	I0816 05:34:51.666873    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/admin.conf
	I0816 05:34:51.669762    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 05:34:51.669794    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 05:34:51.672452    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/kubelet.conf
	I0816 05:34:51.674866    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 05:34:51.674885    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 05:34:51.678050    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/controller-manager.conf
	I0816 05:34:51.680874    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 05:34:51.680893    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 05:34:51.683426    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/scheduler.conf
	I0816 05:34:51.686161    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 05:34:51.686186    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 05:34:51.689151    8876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 05:34:51.691930    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:34:51.716608    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:34:52.433962    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:34:52.570851    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:34:52.602858    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
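
Rather than a full kubeadm init, the restart path replays individual init phases in order: certs all, kubeconfig all, kubelet-start, control-plane all, etcd local, each against the same /var/tmp/minikube/kubeadm.yaml. A sketch of driving that sequence, assuming kubeadm is on PATH (minikube actually invokes the pinned binary under /var/lib/minikube/binaries over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", cfg)
		fmt.Println("kubeadm", args)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("phase %v failed: %v\n%s", phase, err, out))
		}
	}
}
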
	I0816 05:34:52.624166    8876 api_server.go:52] waiting for apiserver process to appear ...
	I0816 05:34:52.624245    8876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:34:53.125192    8876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:34:49.647209    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:49.647311    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:49.658329    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:49.658400    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:49.669186    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:49.669256    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:49.679769    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:49.679844    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:49.690232    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:49.690308    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:49.701303    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:49.701420    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:49.712547    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:49.712630    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:49.722613    8654 logs.go:276] 0 containers: []
	W0816 05:34:49.722628    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:49.722690    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:49.733402    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:49.733421    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:49.733430    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:49.745535    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:49.745546    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:49.757413    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:49.757426    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:49.782072    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:49.782080    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:49.817710    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:49.817722    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:49.829385    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:49.829397    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:49.843692    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:49.843707    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:49.855360    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:49.855371    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:49.869660    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:49.869672    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:49.894468    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:49.894480    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:49.909597    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:49.909607    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:49.914115    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:49.914122    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:49.932844    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:49.932860    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:49.945191    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:49.945201    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:49.957217    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:49.957234    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:50.000710    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:50.000720    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:50.012468    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:50.012486    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:52.526230    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:53.626298    8876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:34:53.630342    8876 api_server.go:72] duration metric: took 1.006194667s to wait for apiserver process to appear ...
	I0816 05:34:53.630353    8876 api_server.go:88] waiting for apiserver healthz status ...
	I0816 05:34:53.630368    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:57.528425    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:57.528600    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:34:57.540005    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:34:57.540079    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:34:57.551041    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:34:57.551111    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:34:57.561519    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:34:57.561593    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:34:57.576177    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:34:57.576248    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:34:57.586587    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:34:57.586656    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:34:57.597254    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:34:57.597328    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:34:57.607926    8654 logs.go:276] 0 containers: []
	W0816 05:34:57.607937    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:34:57.607998    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:34:57.619393    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:34:57.619413    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:34:57.619419    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:34:57.660066    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:34:57.660075    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:34:57.664750    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:34:57.664757    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:34:57.680333    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:34:57.680348    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:34:57.698328    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:34:57.698342    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:34:57.710037    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:34:57.710048    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:34:57.730468    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:34:57.730478    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:34:57.742135    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:34:57.742150    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:34:57.755146    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:34:57.755159    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:34:57.791281    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:34:57.791294    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:34:57.802745    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:34:57.802756    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:34:57.817398    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:34:57.817408    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:34:57.828376    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:34:57.828390    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:34:57.841297    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:34:57.841309    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:34:57.864221    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:34:57.864231    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:34:57.876166    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:34:57.876179    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:34:57.888638    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:34:57.888648    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:34:58.632362    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:58.632383    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
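
Both processes (8876 restarting stopped-upgrade-972000, 8654 running a parallel profile) are now in the same loop: GET https://10.0.2.15:8443/healthz with a client timeout, log "stopped" on context deadline exceeded, gather component logs, retry. "Client.Timeout exceeded while awaiting headers" is the standard net/http message for an http.Client whose Timeout fires, and its repetition means nothing is answering on 8443. A sketch of such a poll loop (the timeout and deadline values are guesses; TLS verification is skipped for brevity, whereas minikube pins the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// A hard client timeout is what yields "Client.Timeout exceeded
		// while awaiting headers" in the log above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
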
	I0816 05:35:00.403266    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:03.632494    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:03.632544    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:05.405436    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:05.405639    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:35:05.420025    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:35:05.420110    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:35:05.431600    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:35:05.431683    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:35:05.442235    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:35:05.442312    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:35:05.452656    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:35:05.452724    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:35:05.463340    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:35:05.463408    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:35:05.474402    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:35:05.474469    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:35:05.485025    8654 logs.go:276] 0 containers: []
	W0816 05:35:05.485038    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:35:05.485101    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:35:05.495319    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:35:05.495335    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:35:05.495343    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:35:05.506632    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:35:05.506645    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:35:05.518196    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:35:05.518209    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:35:05.529923    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:35:05.529936    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:35:05.541671    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:35:05.541702    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:35:05.553510    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:35:05.553521    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:35:05.577246    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:35:05.577256    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:35:05.618826    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:35:05.618836    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:35:05.653583    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:35:05.653594    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:35:05.666887    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:35:05.666910    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:35:05.677866    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:35:05.677878    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:35:05.689528    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:35:05.689540    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:35:05.693964    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:35:05.693973    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:35:05.712621    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:35:05.712631    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:35:05.724574    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:35:05.724588    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:35:05.743828    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:35:05.743838    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:35:05.761186    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:35:05.761196    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:35:08.278487    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:08.632884    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:08.632923    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:13.280300    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:13.280409    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:35:13.291420    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:35:13.291497    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:35:13.301939    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:35:13.302015    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:35:13.312665    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:35:13.312736    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:35:13.323816    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:35:13.323889    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:35:13.334472    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:35:13.334542    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:35:13.345049    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:35:13.345121    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:35:13.355231    8654 logs.go:276] 0 containers: []
	W0816 05:35:13.355242    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:35:13.355301    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:35:13.365696    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:35:13.365713    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:35:13.365719    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:35:13.377019    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:35:13.377029    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:35:13.390998    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:35:13.391008    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:35:13.405341    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:35:13.405448    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:35:13.417046    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:35:13.417059    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:35:13.434368    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:35:13.434382    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:35:13.446260    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:35:13.446271    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:35:13.470290    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:35:13.470301    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:35:13.512708    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:35:13.512725    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:35:13.547621    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:35:13.547634    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:35:13.559643    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:35:13.559654    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:35:13.571278    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:35:13.571291    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:35:13.575792    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:35:13.575799    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:35:13.587559    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:35:13.587571    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:35:13.602594    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:35:13.602605    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:35:13.613928    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:35:13.613939    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:35:13.625344    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:35:13.625357    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:35:13.633269    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:13.633293    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:16.138649    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:18.633882    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:18.633918    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:21.140473    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:21.140574    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:35:21.151911    8654 logs.go:276] 2 containers: [1c1df0a24283 7da996bebe3e]
	I0816 05:35:21.151994    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:35:21.162731    8654 logs.go:276] 2 containers: [908e9b841803 c5598fa8291b]
	I0816 05:35:21.162804    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:35:21.173072    8654 logs.go:276] 1 containers: [f86c0ca08a29]
	I0816 05:35:21.173141    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:35:21.184016    8654 logs.go:276] 2 containers: [82a7160cf6b3 be9ff0533784]
	I0816 05:35:21.184084    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:35:21.195284    8654 logs.go:276] 1 containers: [41826d2a89be]
	I0816 05:35:21.195356    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:35:21.205715    8654 logs.go:276] 2 containers: [09e3f6eaf95c 258b4e54effd]
	I0816 05:35:21.205792    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:35:21.216114    8654 logs.go:276] 0 containers: []
	W0816 05:35:21.216126    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:35:21.216188    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:35:21.226465    8654 logs.go:276] 2 containers: [da3ee567efaa e4a387b28249]
	I0816 05:35:21.226480    8654 logs.go:123] Gathering logs for etcd [c5598fa8291b] ...
	I0816 05:35:21.226486    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5598fa8291b"
	I0816 05:35:21.237893    8654 logs.go:123] Gathering logs for coredns [f86c0ca08a29] ...
	I0816 05:35:21.237905    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86c0ca08a29"
	I0816 05:35:21.253028    8654 logs.go:123] Gathering logs for storage-provisioner [da3ee567efaa] ...
	I0816 05:35:21.253040    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da3ee567efaa"
	I0816 05:35:21.266341    8654 logs.go:123] Gathering logs for storage-provisioner [e4a387b28249] ...
	I0816 05:35:21.266351    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a387b28249"
	I0816 05:35:21.278062    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:35:21.278073    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:35:21.301833    8654 logs.go:123] Gathering logs for kube-apiserver [1c1df0a24283] ...
	I0816 05:35:21.301840    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c1df0a24283"
	I0816 05:35:21.316576    8654 logs.go:123] Gathering logs for kube-proxy [41826d2a89be] ...
	I0816 05:35:21.316588    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41826d2a89be"
	I0816 05:35:21.328696    8654 logs.go:123] Gathering logs for kube-controller-manager [258b4e54effd] ...
	I0816 05:35:21.328707    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258b4e54effd"
	I0816 05:35:21.340759    8654 logs.go:123] Gathering logs for kube-scheduler [be9ff0533784] ...
	I0816 05:35:21.340770    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be9ff0533784"
	I0816 05:35:21.356650    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:35:21.356660    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:35:21.395322    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:35:21.395334    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:35:21.436604    8654 logs.go:123] Gathering logs for kube-apiserver [7da996bebe3e] ...
	I0816 05:35:21.436612    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da996bebe3e"
	I0816 05:35:21.455375    8654 logs.go:123] Gathering logs for etcd [908e9b841803] ...
	I0816 05:35:21.455388    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 908e9b841803"
	I0816 05:35:21.469138    8654 logs.go:123] Gathering logs for kube-scheduler [82a7160cf6b3] ...
	I0816 05:35:21.469148    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a7160cf6b3"
	I0816 05:35:21.491202    8654 logs.go:123] Gathering logs for kube-controller-manager [09e3f6eaf95c] ...
	I0816 05:35:21.491213    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e3f6eaf95c"
	I0816 05:35:21.518423    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:35:21.518436    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:35:21.531000    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:35:21.531011    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:35:24.037649    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:23.634619    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:23.634668    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:29.040272    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:29.040385    8654 kubeadm.go:597] duration metric: took 4m4.580362875s to restartPrimaryControlPlane
	W0816 05:35:29.040463    8654 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 05:35:29.040500    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0816 05:35:30.043030    8654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002534208s)
	I0816 05:35:30.043093    8654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:35:30.048124    8654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 05:35:30.051426    8654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 05:35:30.054201    8654 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 05:35:30.054207    8654 kubeadm.go:157] found existing configuration files:
	
	I0816 05:35:30.054234    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/admin.conf
	I0816 05:35:30.056738    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 05:35:30.056762    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 05:35:30.059964    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/kubelet.conf
	I0816 05:35:30.063038    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 05:35:30.063066    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 05:35:30.065727    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/controller-manager.conf
	I0816 05:35:30.068574    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 05:35:30.068595    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 05:35:30.071727    8654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/scheduler.conf
	I0816 05:35:30.074394    8654 kubeadm.go:163] "https://control-plane.minikube.internal:51173" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51173 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 05:35:30.074415    8654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 05:35:30.076923    8654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 05:35:30.093763    8654 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0816 05:35:30.093807    8654 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 05:35:30.142570    8654 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 05:35:30.142629    8654 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 05:35:30.142683    8654 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 05:35:30.194901    8654 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 05:35:30.200146    8654 out.go:235]   - Generating certificates and keys ...
	I0816 05:35:30.200183    8654 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 05:35:30.200224    8654 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 05:35:30.200271    8654 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 05:35:30.200317    8654 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 05:35:30.200356    8654 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 05:35:30.200388    8654 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 05:35:30.200427    8654 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 05:35:30.200459    8654 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 05:35:30.200496    8654 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 05:35:30.200531    8654 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 05:35:30.200558    8654 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 05:35:30.200587    8654 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 05:35:30.371424    8654 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 05:35:30.651194    8654 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 05:35:30.727753    8654 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 05:35:30.832352    8654 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 05:35:30.865042    8654 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 05:35:30.865330    8654 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 05:35:30.865381    8654 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 05:35:30.934119    8654 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 05:35:28.635771    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:28.635825    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:30.939208    8654 out.go:235]   - Booting up control plane ...
	I0816 05:35:30.939263    8654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 05:35:30.939326    8654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 05:35:30.939366    8654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 05:35:30.939408    8654 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 05:35:30.939493    8654 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 05:35:35.945735    8654 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.007389 seconds
	I0816 05:35:35.945921    8654 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 05:35:35.960715    8654 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 05:35:36.478363    8654 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 05:35:36.478482    8654 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-607000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 05:35:36.982555    8654 kubeadm.go:310] [bootstrap-token] Using token: zhwf8w.yn3s54awl8nvlo1t
	I0816 05:35:36.986096    8654 out.go:235]   - Configuring RBAC rules ...
	I0816 05:35:36.986160    8654 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 05:35:36.986205    8654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 05:35:36.988222    8654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 05:35:36.992454    8654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 05:35:36.993265    8654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 05:35:36.994450    8654 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 05:35:36.997548    8654 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 05:35:37.172024    8654 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 05:35:37.386124    8654 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 05:35:37.386695    8654 kubeadm.go:310] 
	I0816 05:35:37.386729    8654 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 05:35:37.386733    8654 kubeadm.go:310] 
	I0816 05:35:37.386782    8654 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 05:35:37.386796    8654 kubeadm.go:310] 
	I0816 05:35:37.386818    8654 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 05:35:37.386848    8654 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 05:35:37.386882    8654 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 05:35:37.386886    8654 kubeadm.go:310] 
	I0816 05:35:37.386920    8654 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 05:35:37.386928    8654 kubeadm.go:310] 
	I0816 05:35:37.386959    8654 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 05:35:37.386962    8654 kubeadm.go:310] 
	I0816 05:35:37.386988    8654 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 05:35:37.387043    8654 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 05:35:37.387078    8654 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 05:35:37.387081    8654 kubeadm.go:310] 
	I0816 05:35:37.387123    8654 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 05:35:37.387186    8654 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 05:35:37.387192    8654 kubeadm.go:310] 
	I0816 05:35:37.387268    8654 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zhwf8w.yn3s54awl8nvlo1t \
	I0816 05:35:37.387319    8654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23cf10825d548a004e2d3ef8e1c65218486081db837b36803636fece4fac457f \
	I0816 05:35:37.387331    8654 kubeadm.go:310] 	--control-plane 
	I0816 05:35:37.387336    8654 kubeadm.go:310] 
	I0816 05:35:37.387378    8654 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 05:35:37.387382    8654 kubeadm.go:310] 
	I0816 05:35:37.387422    8654 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zhwf8w.yn3s54awl8nvlo1t \
	I0816 05:35:37.387475    8654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23cf10825d548a004e2d3ef8e1c65218486081db837b36803636fece4fac457f 
	I0816 05:35:37.387704    8654 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 05:35:37.387715    8654 cni.go:84] Creating CNI manager for ""
	I0816 05:35:37.387723    8654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:35:37.391367    8654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 05:35:37.398289    8654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 05:35:37.401507    8654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 05:35:37.406994    8654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 05:35:37.407062    8654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 05:35:37.407069    8654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-607000 minikube.k8s.io/updated_at=2024_08_16T05_35_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=running-upgrade-607000 minikube.k8s.io/primary=true
	I0816 05:35:37.449384    8654 ops.go:34] apiserver oom_adj: -16
	I0816 05:35:37.449456    8654 kubeadm.go:1113] duration metric: took 42.437375ms to wait for elevateKubeSystemPrivileges
	I0816 05:35:37.449467    8654 kubeadm.go:394] duration metric: took 4m13.003487625s to StartCluster
	I0816 05:35:37.449476    8654 settings.go:142] acquiring lock: {Name:mkec9dae897ed6cd1355cb2ba10161c54c163fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:35:37.449648    8654 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:35:37.450051    8654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/kubeconfig: {Name:mka7b2a1dac03f0ea4ac28563b4fe884a2b1b206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:35:37.450273    8654 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:35:37.450302    8654 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 05:35:37.450341    8654 config.go:182] Loaded profile config "running-upgrade-607000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:35:37.450344    8654 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-607000"
	I0816 05:35:37.450341    8654 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-607000"
	I0816 05:35:37.450366    8654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-607000"
	I0816 05:35:37.450370    8654 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-607000"
	W0816 05:35:37.450386    8654 addons.go:243] addon storage-provisioner should already be in state true
	I0816 05:35:37.450395    8654 host.go:66] Checking if "running-upgrade-607000" exists ...
	I0816 05:35:37.451531    8654 kapi.go:59] client config for running-upgrade-607000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/running-upgrade-607000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ee1610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 05:35:37.452416    8654 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-607000"
	W0816 05:35:37.452429    8654 addons.go:243] addon default-storageclass should already be in state true
	I0816 05:35:37.452442    8654 host.go:66] Checking if "running-upgrade-607000" exists ...
	I0816 05:35:37.454305    8654 out.go:177] * Verifying Kubernetes components...
	I0816 05:35:37.454731    8654 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 05:35:37.458571    8654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 05:35:37.458578    8654 sshutil.go:53] new ssh client: &{IP:localhost Port:51141 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/running-upgrade-607000/id_rsa Username:docker}
	I0816 05:35:37.462332    8654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:35:33.637003    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:33.637029    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:37.466278    8654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:35:37.470356    8654 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 05:35:37.470364    8654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 05:35:37.470371    8654 sshutil.go:53] new ssh client: &{IP:localhost Port:51141 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/running-upgrade-607000/id_rsa Username:docker}
	I0816 05:35:37.538942    8654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:35:37.543779    8654 api_server.go:52] waiting for apiserver process to appear ...
	I0816 05:35:37.543822    8654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:35:37.547805    8654 api_server.go:72] duration metric: took 97.523792ms to wait for apiserver process to appear ...
	I0816 05:35:37.547814    8654 api_server.go:88] waiting for apiserver healthz status ...
	I0816 05:35:37.547821    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:37.561387    8654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 05:35:37.607545    8654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 05:35:37.879636    8654 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 05:35:37.879647    8654 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 05:35:38.638503    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:38.638534    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:42.549843    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:42.549887    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:43.639576    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:43.639620    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:47.550069    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:47.550094    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:48.641847    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:48.641869    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:52.550308    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:52.550335    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:53.643210    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:53.643380    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:35:53.655584    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:35:53.655660    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:35:53.666552    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:35:53.666628    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:35:53.676664    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:35:53.676735    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:35:53.686902    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:35:53.686975    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:35:53.697761    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:35:53.697831    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:35:53.711032    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:35:53.711128    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:35:53.721464    8876 logs.go:276] 0 containers: []
	W0816 05:35:53.721475    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:35:53.721535    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:35:53.731758    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:35:53.731780    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:35:53.731788    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:35:53.745735    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:35:53.745747    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:35:53.757664    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:35:53.757674    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:35:53.774338    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:35:53.774348    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:35:53.785895    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:35:53.785905    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:35:53.798248    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:35:53.798259    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:35:53.838444    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:35:53.838457    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:35:53.854221    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:35:53.854237    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:35:53.871994    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:35:53.872007    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:35:53.896995    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:35:53.897002    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:35:53.915355    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:35:53.915365    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:35:54.025248    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:35:54.025260    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:35:54.029303    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:35:54.029309    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:35:54.043538    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:35:54.043554    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:35:54.059329    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:35:54.059339    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:35:54.071726    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:35:54.071737    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:35:54.084994    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:35:54.085008    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:35:56.629300    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:57.550641    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:57.550669    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:01.629505    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:01.629664    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:01.644333    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:01.644408    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:01.664607    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:01.664685    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:01.675912    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:01.675987    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:01.686612    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:01.686683    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:01.697236    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:01.697305    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:01.708554    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:01.708623    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:01.718498    8876 logs.go:276] 0 containers: []
	W0816 05:36:01.718512    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:01.718576    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:01.729926    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:01.729943    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:01.729949    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:01.743316    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:01.743326    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:01.755565    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:01.755576    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:01.769786    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:01.769796    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:01.781597    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:01.781609    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:01.795852    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:01.795863    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:01.800746    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:01.800752    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:01.811760    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:01.811771    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:01.826745    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:01.826756    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:01.843647    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:01.843657    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:01.856593    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:01.856603    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:01.895426    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:01.895435    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:01.933863    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:01.933874    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:01.972091    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:01.972101    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:01.997906    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:01.997917    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:02.017932    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:02.017943    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:02.038915    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:02.038926    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:02.551087    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:02.551126    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:07.551759    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:07.551808    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0816 05:36:07.881556    8654 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0816 05:36:07.886278    8654 out.go:177] * Enabled addons: storage-provisioner
	I0816 05:36:04.554215    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:07.896193    8654 addons.go:510] duration metric: took 30.446415333s for enable addons: enabled=[storage-provisioner]
	I0816 05:36:09.556466    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:09.556662    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:09.575069    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:09.575166    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:09.588138    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:09.588215    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:09.602480    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:09.602553    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:09.617430    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:09.617526    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:09.628065    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:09.628132    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:09.638721    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:09.638796    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:09.648761    8876 logs.go:276] 0 containers: []
	W0816 05:36:09.648772    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:09.648834    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:09.659317    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:09.659335    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:09.659341    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:09.673808    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:09.673819    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:09.690852    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:09.690864    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:09.702010    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:09.702022    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:09.716344    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:09.716353    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:09.727359    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:09.727370    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:09.752995    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:09.753008    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:09.791158    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:09.791167    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:09.795704    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:09.795712    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:09.836361    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:09.836372    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:09.850331    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:09.850340    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:09.862626    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:09.862636    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:09.874081    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:09.874092    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:09.891084    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:09.891095    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:09.904187    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:09.904201    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:09.937383    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:09.937394    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:09.951878    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:09.951888    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:12.467349    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:12.552615    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:12.552645    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:17.469629    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:17.469819    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:17.491881    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:17.491979    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:17.507036    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:17.507120    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:17.519393    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:17.519469    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:17.530526    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:17.530592    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:17.540538    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:17.540600    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:17.550880    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:17.550957    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:17.561360    8876 logs.go:276] 0 containers: []
	W0816 05:36:17.561370    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:17.561427    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:17.571710    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:17.571727    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:17.571733    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:17.610895    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:17.610905    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:17.625336    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:17.625349    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:17.637310    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:17.637321    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:17.652596    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:17.652608    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:17.663890    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:17.663905    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:17.676831    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:17.676843    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:17.680988    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:17.680994    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:17.719436    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:17.719451    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:17.733528    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:17.733539    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:17.747653    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:17.747664    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:17.759433    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:17.759447    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:17.772140    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:17.772153    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:17.811879    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:17.811896    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:17.827318    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:17.827333    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:17.845346    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:17.845357    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:17.869640    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:17.869649    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:17.553581    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:17.553596    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:20.383174    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:22.554792    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:22.554816    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:25.385708    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:25.385909    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:25.402508    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:25.402595    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:25.415773    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:25.415848    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:25.426886    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:25.426958    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:25.437551    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:25.437628    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:25.447756    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:25.447823    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:25.457952    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:25.458026    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:25.468065    8876 logs.go:276] 0 containers: []
	W0816 05:36:25.468077    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:25.468137    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:25.478521    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:25.478541    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:25.478546    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:25.501488    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:25.501506    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:25.516184    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:25.516200    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:25.529856    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:25.529867    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:25.552090    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:25.552101    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:25.566359    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:25.566368    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:25.581034    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:25.581045    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:25.598368    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:25.598378    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:25.610047    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:25.610057    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:25.634594    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:25.634605    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:25.670222    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:25.670232    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:25.681396    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:25.681408    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:25.685666    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:25.685672    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:25.696940    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:25.696952    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:25.709173    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:25.709187    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:25.721105    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:25.721118    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:25.760300    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:25.760308    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
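	[editor's note] Each retry round starts with container discovery: the ssh_runner lines run `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component (logs.go:276 reports the IDs found — two kube-apiserver containers here because one has already been restarted), then each logs.go:123 "Gathering logs for" step tails the last 400 lines of a hit. A simplified sketch of that discover-then-tail pattern, assuming local execution for illustration (minikube actually runs these commands inside the guest over SSH):

	// Sketch of the discovery/tail pattern visible in the logs.go lines above;
	// a simplified stand-in, not minikube's real ssh_runner.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs runs `docker ps -a --filter=name=k8s_<name>` the way the
	// logs.go:276 lines do, returning the matching container IDs.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(component)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", component)
				continue
			}
			for _, id := range ids {
				// mirrors: /bin/bash -c "docker logs --tail 400 <id>"
				out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", component, id, out)
			}
		}
	}

	The recurring warning `No container was found matching "kindnet"` is this same discovery coming back empty, which is expected when the cluster does not use the kindnet CNI; it is noise, not the failure itself.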
	I0816 05:36:27.556378    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:27.556412    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:28.300287    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:32.557862    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:32.557886    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:33.300569    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:33.300717    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:33.317112    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:33.317201    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:33.331176    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:33.331252    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:33.342051    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:33.342120    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:33.353285    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:33.353354    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:33.363453    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:33.363522    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:33.373845    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:33.373905    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:33.383953    8876 logs.go:276] 0 containers: []
	W0816 05:36:33.383966    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:33.384029    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:33.394814    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:33.394830    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:33.394836    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:33.409205    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:33.409216    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:33.424235    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:33.424245    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:33.440056    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:33.440070    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:33.478150    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:33.478163    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:33.494012    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:33.494023    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:33.506022    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:33.506034    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:33.529615    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:33.529624    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:33.567836    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:33.567849    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:33.583180    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:33.583193    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:33.601323    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:33.601334    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:33.613281    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:33.613294    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:33.625861    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:33.625874    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:33.637223    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:33.637234    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:33.651079    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:33.651093    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:33.664395    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:33.664405    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:33.668709    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:33.668718    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:36.205655    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:37.560027    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:37.560141    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:37.576906    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:36:37.576981    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:37.588896    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:36:37.588977    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:37.599309    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:36:37.599385    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:37.611601    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:36:37.611668    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:37.621779    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:36:37.621840    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:37.638009    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:36:37.638083    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:37.652272    8654 logs.go:276] 0 containers: []
	W0816 05:36:37.652285    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:37.652337    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:37.662582    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:36:37.662597    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:37.662603    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:37.666894    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:37.666900    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:37.702279    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:36:37.702294    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:36:37.716534    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:36:37.716544    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:36:37.728292    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:36:37.728302    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:36:37.749815    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:36:37.749827    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:36:37.760763    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:37.760773    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:37.795752    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:36:37.795761    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:36:37.807187    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:36:37.807197    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:36:37.824244    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:36:37.824256    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:36:37.842539    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:37.842547    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:37.867604    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:36:37.867616    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:37.879208    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:36:37.879220    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
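	[editor's note] The "container status" step in each round relies on a shell fallback chain: the backtick substitution resolves crictl's path if it is installed (otherwise it expands to the bare word crictl, whose failure hands control to `|| sudo docker ps -a`). A hedged Go sketch of invoking the same chain; `$(...)` is used below because it is equivalent to the backtick form and survives a Go raw string literal:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same fallback chain as the log line; $(...) == the backtick form.
		cmd := `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}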
	I0816 05:36:41.208031    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:41.208147    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:41.220958    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:41.221026    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:41.231946    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:41.232024    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:41.250015    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:41.250083    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:41.260778    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:41.260850    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:41.273400    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:41.273474    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:41.284494    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:41.284567    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:41.295392    8876 logs.go:276] 0 containers: []
	W0816 05:36:41.295408    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:41.295470    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:41.309808    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:41.309829    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:41.309835    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:41.345522    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:41.345533    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:41.359163    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:41.359175    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:41.373517    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:41.373528    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:41.377714    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:41.377723    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:41.392463    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:41.392473    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:41.431321    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:41.431332    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:41.445731    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:41.445740    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:41.460404    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:41.460414    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:41.475292    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:41.475302    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:41.498921    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:41.498934    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:41.537346    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:41.537360    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:41.549070    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:41.549080    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:41.560698    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:41.560724    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:41.572477    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:41.572490    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:41.593373    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:41.593384    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:41.610782    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:41.610792    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:40.399780    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:44.124362    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:45.402078    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:45.402248    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:45.414333    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:36:45.414410    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:45.425152    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:36:45.425230    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:45.435431    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:36:45.435496    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:45.446148    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:36:45.446218    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:45.456318    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:36:45.456394    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:45.466724    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:36:45.466794    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:45.476895    8654 logs.go:276] 0 containers: []
	W0816 05:36:45.476909    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:45.476976    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:45.487559    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:36:45.487577    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:36:45.487582    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:36:45.498917    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:36:45.498929    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:36:45.515153    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:36:45.515165    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:36:45.529907    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:36:45.529920    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:36:45.540771    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:45.540783    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:45.575507    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:45.575517    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:45.580354    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:36:45.580361    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:36:45.594599    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:36:45.594611    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:36:45.608290    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:45.608303    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:45.632175    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:36:45.632185    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:45.643584    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:45.643597    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:45.679813    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:36:45.679827    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:36:45.696043    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:36:45.696057    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:36:48.219276    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:49.126559    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:49.126663    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:49.138547    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:49.138619    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:49.149330    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:49.149392    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:49.159221    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:49.159284    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:49.169954    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:49.170030    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:49.180711    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:49.180786    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:49.191848    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:49.191916    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:49.201737    8876 logs.go:276] 0 containers: []
	W0816 05:36:49.201750    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:49.201815    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:49.212630    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:49.212647    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:49.212655    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:49.217041    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:49.217050    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:49.231305    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:49.231317    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:49.268814    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:49.268827    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:49.280454    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:49.280467    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:49.293015    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:49.293026    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:49.304060    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:49.304069    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:49.318786    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:49.318798    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:49.334057    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:49.334068    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:49.351945    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:49.351956    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:49.363836    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:49.363849    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:49.375785    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:49.375797    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:49.414266    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:49.414276    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:49.428617    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:49.428630    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:49.465125    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:49.465136    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:49.479083    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:49.479096    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:49.492898    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:49.492910    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:52.020408    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:53.221561    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:53.221745    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:53.238945    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:36:53.239043    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:53.259884    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:36:53.259957    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:53.270720    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:36:53.270794    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:53.281692    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:36:53.281759    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:53.291968    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:36:53.292034    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:53.302404    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:36:53.302469    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:53.312219    8654 logs.go:276] 0 containers: []
	W0816 05:36:53.312229    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:53.312282    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:53.322933    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:36:53.322947    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:53.322954    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:53.327967    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:36:53.327976    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:36:53.341881    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:36:53.341890    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:36:53.353466    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:36:53.353477    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:36:53.365139    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:36:53.365149    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:36:53.381783    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:36:53.381792    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:36:53.393101    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:53.393112    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:53.417784    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:36:53.417792    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:53.428729    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:53.428741    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:53.464317    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:53.464331    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:53.500664    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:36:53.500675    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:36:53.515425    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:36:53.515438    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:36:53.528513    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:36:53.528527    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:36:57.022663    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:57.022788    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:57.034057    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:57.034138    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:57.044880    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:57.044956    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:57.056704    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:57.056776    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:57.067478    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:57.067540    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:57.077997    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:57.078070    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:57.089334    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:57.089434    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:57.101389    8876 logs.go:276] 0 containers: []
	W0816 05:36:57.101401    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:57.101467    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:57.111525    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:57.111545    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:57.111551    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:57.150157    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:57.150169    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:57.188871    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:57.188882    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:57.203145    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:57.203156    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:57.218234    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:57.218244    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:57.231364    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:57.231379    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:57.244939    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:57.244952    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:57.249099    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:57.249109    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:57.260769    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:57.260781    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:57.272590    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:57.272600    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:57.284145    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:57.284158    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:57.296160    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:57.296175    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:57.308015    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:57.308026    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:57.333000    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:57.333008    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:57.346942    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:57.346952    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:57.386740    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:57.386767    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:57.400861    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:57.400872    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:56.045831    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:59.920147    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:01.048506    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:01.048719    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:01.066550    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:01.066644    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:01.080432    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:01.080515    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:01.092378    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:01.092450    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:01.102802    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:01.102869    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:01.113362    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:01.113436    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:01.124420    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:01.124499    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:01.134952    8654 logs.go:276] 0 containers: []
	W0816 05:37:01.134963    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:01.135026    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:01.145476    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:01.145490    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:01.145496    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:01.149891    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:01.149901    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:01.163233    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:01.163243    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:01.175716    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:01.175730    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:01.187863    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:01.187875    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:01.212233    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:01.212243    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:01.236647    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:01.236655    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:01.271379    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:01.271390    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:01.289628    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:01.289640    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:01.301528    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:01.301538    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:01.316020    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:01.316030    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:01.327917    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:01.327928    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:01.339401    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:01.339411    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:03.885683    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:04.922746    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:04.922915    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:04.938457    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:04.938548    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:04.950681    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:04.950754    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:04.961992    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:04.962065    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:04.972988    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:04.973067    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:04.986608    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:04.986679    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:04.997969    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:04.998046    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:05.008199    8876 logs.go:276] 0 containers: []
	W0816 05:37:05.008213    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:05.008277    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:05.018652    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:05.018674    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:05.018681    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:05.032705    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:05.032715    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:05.043740    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:05.043753    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:05.055266    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:05.055276    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:05.060004    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:05.060011    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:05.098416    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:05.098428    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:05.112552    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:05.112566    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:05.148745    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:05.148758    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:05.160862    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:05.160872    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:05.172458    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:05.172472    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:05.183553    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:05.183567    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:05.208647    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:05.208657    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:05.220026    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:05.220038    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:05.260609    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:05.260629    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:05.275495    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:05.275507    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:05.290557    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:05.290569    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:05.308766    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:05.308776    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:07.823833    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:08.888018    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:08.888366    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:08.925714    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:08.925854    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:08.948683    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:08.948778    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:08.963571    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:08.963641    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:08.975676    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:08.975754    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:08.991438    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:08.991516    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:09.005626    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:09.005695    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:09.015627    8654 logs.go:276] 0 containers: []
	W0816 05:37:09.015638    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:09.015699    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:09.027504    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:09.027521    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:09.027526    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:09.042105    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:09.042119    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:09.058731    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:09.058745    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:09.082908    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:09.082916    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:09.094806    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:09.094820    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:09.108661    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:09.108673    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:09.113162    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:09.113171    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:09.147518    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:09.147531    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:12.826118    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:12.826335    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:12.842295    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:12.842379    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:12.855416    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:12.855491    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:12.866521    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:12.866596    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:12.876704    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:12.876768    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:12.887584    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:12.887659    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:12.898689    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:12.898754    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:12.908933    8876 logs.go:276] 0 containers: []
	W0816 05:37:12.908943    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:12.908997    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:12.919298    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:12.919315    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:12.919320    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:12.933860    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:12.933872    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:12.945243    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:12.945255    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:12.963033    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:12.963045    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:12.974187    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:12.974197    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:12.999004    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:12.999015    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:13.011235    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:13.011246    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:13.049121    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:13.049132    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:13.063346    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:13.063357    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:13.074877    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:13.074887    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:13.089646    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:13.089658    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:13.093707    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:13.093713    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:13.128098    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:13.128113    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:13.143238    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:13.143247    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:13.156081    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:13.156096    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
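
The repeated `api_server.go:253` / `api_server.go:269` pairs above are minikube polling the apiserver's `/healthz` endpoint and timing out ("Client.Timeout exceeded while awaiting headers") because the apiserver never answers. As a rough standalone sketch of that polling pattern only (function name `waitForHealthz` and all timeouts are illustrative assumptions, not minikube's actual implementation):

```go
// Minimal sketch, assuming a self-signed cert on the guest apiserver:
// poll an HTTPS /healthz endpoint with a short per-request timeout until
// it answers 200 or an overall deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, perRequest, overall time.Duration) error {
	client := &http.Client{
		// A hung apiserver trips this timeout, yielding the
		// "Client.Timeout exceeded while awaiting headers" error seen above.
		Timeout: perRequest,
		Transport: &http.Transport{
			// Skip verification: the guest apiserver's cert is self-signed.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported healthy
			}
		} else {
			fmt.Printf("stopped: %s: %v\n", url, err)
		}
		time.Sleep(2 * time.Second) // back off between probes
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	// Endpoint taken from the log above; durations are made up for the sketch.
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
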
	I0816 05:37:09.161744    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:09.161754    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:09.176208    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:09.176221    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:09.193143    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:09.193153    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:09.210573    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:09.210584    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:09.221712    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:09.221723    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:11.760095    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:13.193045    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:13.193055    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:13.211426    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:13.211442    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:15.724691    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:16.762500    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:16.762833    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:16.795008    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:16.795140    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:16.818284    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:16.818375    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:16.831535    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:16.831613    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:16.843317    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:16.843386    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:16.854100    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:16.854174    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:16.865306    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:16.865378    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:16.875220    8654 logs.go:276] 0 containers: []
	W0816 05:37:16.875232    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:16.875290    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:16.887156    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:16.887169    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:16.887174    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:16.898964    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:16.898974    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:16.915965    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:16.915975    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:16.928350    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:16.928360    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:16.964001    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:16.964011    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:16.982659    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:16.982672    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:16.996126    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:16.996138    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:17.008288    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:17.008298    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:17.023717    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:17.023732    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:17.028405    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:17.028412    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:17.068482    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:17.068497    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:17.082697    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:17.082707    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:17.094393    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:17.094403    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:20.727081    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:20.727448    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:20.766315    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:20.766453    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:20.788036    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:20.788145    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:20.808091    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:20.808165    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:20.824910    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:20.824982    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:20.836256    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:20.836325    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:20.847170    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:20.847237    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:20.857620    8876 logs.go:276] 0 containers: []
	W0816 05:37:20.857636    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:20.857695    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:20.868364    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:20.868381    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:20.868387    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:20.907534    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:20.907545    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:20.942803    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:20.942813    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:20.982421    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:20.982435    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:20.996810    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:20.996821    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:21.012812    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:21.012823    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:21.024422    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:21.024432    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:21.042630    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:21.042640    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:21.055677    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:21.055693    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:21.070757    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:21.070768    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:21.081753    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:21.081763    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:21.106035    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:21.106045    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:21.110018    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:21.110027    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:21.121522    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:21.121534    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:21.133343    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:21.133354    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:21.153411    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:21.153425    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:21.165087    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:21.165098    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
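
Each failed health check above is followed by a `logs.go:276` enumeration pass: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component, matching the `k8s_`-prefixed names that kubeadm-style clusters give their containers. A hedged sketch of that step (the helper `containerIDs` is hypothetical; it simply shells out the same command the log shows):

```go
// Minimal sketch of the container-enumeration pass: list the IDs of all
// containers, running or exited, whose name matches a component prefix.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs shells out to docker the way the ssh_runner lines above do.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per output line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Mirrors the "N containers: [...]" lines logged at logs.go:276.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```

Note that a count of 2 for a component (as with kube-apiserver `[2881150c8a81 a54c050fa5fd]` above) includes exited containers, i.e. a restarted instance plus its dead predecessor.
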
	I0816 05:37:19.626499    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:23.678636    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:24.628677    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:24.628824    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:24.648030    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:24.648126    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:24.664125    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:24.664199    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:24.685230    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:24.685302    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:24.695534    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:24.695601    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:24.705800    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:24.705874    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:24.716313    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:24.716384    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:24.726623    8654 logs.go:276] 0 containers: []
	W0816 05:37:24.726635    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:24.726698    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:24.736921    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:24.736939    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:24.736944    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:24.749597    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:24.749608    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:24.766847    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:24.766857    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:24.778714    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:24.778724    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:24.816341    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:24.816351    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:24.821027    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:24.821036    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:24.835083    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:24.835092    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:24.851682    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:24.851694    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:24.863339    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:24.863350    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:24.888459    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:24.888465    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:24.938880    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:24.938892    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:24.951314    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:24.951323    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:24.968927    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:24.968936    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:27.484734    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:28.680966    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:28.681179    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:28.705731    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:28.705851    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:28.722128    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:28.722207    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:28.739420    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:28.739498    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:28.750897    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:28.750970    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:28.761392    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:28.761461    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:28.771569    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:28.771639    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:28.781408    8876 logs.go:276] 0 containers: []
	W0816 05:37:28.781419    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:28.781487    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:28.796165    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:28.796181    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:28.796187    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:28.808111    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:28.808122    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:28.844548    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:28.844563    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:28.858090    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:28.858100    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:28.869433    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:28.869442    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:28.885501    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:28.885518    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:28.889630    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:28.889638    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:28.927547    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:28.927559    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:28.943539    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:28.943550    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:28.964792    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:28.964803    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:28.976456    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:28.976468    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:28.994172    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:28.994182    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:29.006248    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:29.006259    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:29.020398    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:29.020408    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:29.031817    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:29.031829    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:29.056148    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:29.056155    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:29.095476    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:29.095488    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:31.611894    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:32.486860    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:32.486974    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:32.498031    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:32.498112    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:32.508823    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:32.508899    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:32.519953    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:32.520027    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:32.530455    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:32.530529    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:32.541378    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:32.541450    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:32.551953    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:32.552022    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:32.565485    8654 logs.go:276] 0 containers: []
	W0816 05:37:32.565496    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:32.565561    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:32.576440    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:32.576456    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:32.576463    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:32.612429    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:32.612438    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:32.626766    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:32.626777    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:32.641506    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:32.641516    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:32.655993    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:32.656003    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:32.673133    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:32.673143    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:32.685026    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:32.685036    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:32.696362    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:32.696372    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:32.701246    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:32.701254    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:32.736003    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:32.736015    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:32.748518    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:32.748528    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:32.760267    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:32.760277    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:32.771625    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:32.771636    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
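
The `logs.go:123` gathering steps then tail the last 400 lines of each container found, plus journalctl units and a container-status listing that prefers crictl and falls back to docker (the backticked `which crictl || echo crictl` one-liner above). A rough sketch of those two pieces, with hypothetical helper names and the hardcoded container ID taken from the log:

```go
// Minimal sketch: tail a container's logs, and list container status with a
// crictl-then-docker fallback, mirroring the shell fallback in the log.
// Illustration only; minikube actually runs these over SSH inside the guest.
package main

import (
	"fmt"
	"os/exec"
)

func tailContainerLogs(id string, lines int) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func containerStatus() (string, error) {
	// Prefer crictl when present; otherwise fall back to docker ps -a,
	// as the log's bash command substitution does.
	if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("7e7027a018f3", 400) // kube-apiserver ID from the log above
	if err != nil {
		fmt.Println("docker logs failed:", err)
	}
	fmt.Print(logs)
	if st, err := containerStatus(); err == nil {
		fmt.Print(st)
	}
}
```
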
	I0816 05:37:36.614085    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:36.614309    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:36.632929    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:36.633029    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:36.646500    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:36.646575    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:36.658404    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:36.658504    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:36.670491    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:36.670561    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:36.680998    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:36.681071    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:36.692528    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:36.692593    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:36.703003    8876 logs.go:276] 0 containers: []
	W0816 05:37:36.703022    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:36.703079    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:36.713153    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:36.713170    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:36.713176    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:36.750845    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:36.750859    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:36.764072    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:36.764084    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:36.775654    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:36.775665    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:36.780504    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:36.780515    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:36.797991    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:36.798000    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:36.810758    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:36.810770    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:36.822161    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:36.822173    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:36.843589    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:36.843606    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:36.882930    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:36.882943    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:36.921613    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:36.921624    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:36.935281    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:36.935292    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:36.946426    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:36.946438    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:36.971341    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:36.971352    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:36.987578    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:36.987589    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:37.002437    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:37.002448    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:37.022041    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:37.022053    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:35.297300    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:39.540111    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:40.299720    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:40.300053    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:40.331933    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:40.332055    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:40.351200    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:40.351289    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:40.365698    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:40.365760    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:40.377478    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:40.377550    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:40.389459    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:40.389529    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:40.401096    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:40.401166    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:40.411339    8654 logs.go:276] 0 containers: []
	W0816 05:37:40.411353    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:40.411414    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:40.422050    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:40.422070    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:40.422077    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:40.459812    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:40.459829    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:40.464761    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:40.464768    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:40.501986    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:40.501998    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:40.518036    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:40.518050    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:40.533228    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:40.533240    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:40.550672    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:40.550684    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:40.575736    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:40.575746    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:40.590345    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:40.590360    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:40.604222    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:40.604232    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:40.616275    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:40.616285    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:40.631439    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:40.631449    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:40.642668    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:40.642677    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:43.156080    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:44.542291    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:44.542480    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:44.557337    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:44.557417    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:44.568680    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:44.568753    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:44.578998    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:44.579082    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:44.589414    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:44.589489    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:44.599790    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:44.599859    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:44.610277    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:44.610343    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:44.620686    8876 logs.go:276] 0 containers: []
	W0816 05:37:44.620698    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:44.620762    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:44.631919    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:44.631936    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:44.631942    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:44.646204    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:44.646217    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:44.659897    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:44.659909    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:44.671033    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:44.671045    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:44.685760    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:44.685771    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:44.698572    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:44.698583    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:44.721409    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:44.721416    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:44.733317    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:44.733328    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:44.767571    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:44.767582    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:44.779123    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:44.779135    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:44.818139    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:44.818149    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:44.822413    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:44.822421    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:44.836980    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:44.836990    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:44.850055    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:44.850066    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:44.867208    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:44.867219    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:44.879107    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:44.879117    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:44.916498    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:44.916510    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:47.430997    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:48.158388    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:48.158611    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:48.184467    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:48.184587    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:48.201627    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:48.201714    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:48.214785    8654 logs.go:276] 2 containers: [e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:48.214854    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:48.226541    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:48.226611    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:48.237362    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:48.237436    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:48.247803    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:48.247874    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:48.262131    8654 logs.go:276] 0 containers: []
	W0816 05:37:48.262142    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:48.262202    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:48.272466    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:48.272482    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:48.272487    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:48.283998    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:48.284008    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:48.309120    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:48.309135    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:48.347222    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:48.347240    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:48.356354    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:48.356368    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:48.437820    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:48.437834    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:48.452422    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:48.452434    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:48.464477    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:48.464491    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:48.476226    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:48.476236    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:48.490226    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:48.490239    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:48.505902    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:48.505913    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:48.517595    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:48.517605    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:48.533177    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:48.533187    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:52.433197    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:52.433324    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:52.447047    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:52.447129    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:52.459109    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:52.459184    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:52.469597    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:52.469673    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:52.481845    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:52.481916    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:52.492477    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:52.492547    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:52.503692    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:52.503758    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:52.522414    8876 logs.go:276] 0 containers: []
	W0816 05:37:52.522426    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:52.522488    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:52.533423    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:52.533442    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:52.533448    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:52.545060    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:52.545071    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:52.559502    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:52.559514    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:52.582339    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:52.582358    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:52.587747    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:52.587757    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:52.603044    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:52.603055    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:52.617571    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:52.617581    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:52.629558    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:52.629568    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:52.642993    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:52.643004    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:52.662599    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:52.662609    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:52.675845    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:52.675859    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:52.687113    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:52.687124    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:52.723606    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:52.723616    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:52.758397    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:52.758409    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:52.770089    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:52.770099    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:52.781819    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:52.781828    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:52.819809    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:52.819820    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:51.051275    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:55.333628    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:56.052696    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:56.052902    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:56.073304    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:37:56.073394    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:56.086399    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:37:56.086476    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:56.099064    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:37:56.099136    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:56.110401    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:37:56.110468    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:56.120811    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:37:56.120878    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:56.131739    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:37:56.131808    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:56.141930    8654 logs.go:276] 0 containers: []
	W0816 05:37:56.141942    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:56.142008    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:56.152152    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:37:56.152175    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:56.152181    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:56.156607    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:37:56.156613    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:37:56.168512    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:37:56.168524    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:37:56.183340    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:56.183350    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:56.209112    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:37:56.209126    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:37:56.223723    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:37:56.223736    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:37:56.235836    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:37:56.235848    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:37:56.247464    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:37:56.247477    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:37:56.259442    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:37:56.259453    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:56.270761    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:56.270771    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:56.306396    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:37:56.306406    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:37:56.317992    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:37:56.318004    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:37:56.335748    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:37:56.335762    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:37:56.347095    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:56.347109    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:56.384015    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:37:56.384024    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:37:58.904655    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:00.335892    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:00.336061    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:00.353608    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:00.353694    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:00.369289    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:00.369362    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:00.380135    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:00.380215    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:00.391556    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:00.391631    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:00.401688    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:00.401756    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:00.411938    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:00.412012    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:00.422946    8876 logs.go:276] 0 containers: []
	W0816 05:38:00.422957    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:00.423017    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:00.433763    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:00.433779    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:00.433784    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:00.447924    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:00.447937    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:00.462667    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:00.462679    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:00.478505    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:00.478516    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:00.496039    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:00.496052    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:00.511030    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:00.511043    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:00.526194    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:00.526205    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:00.563857    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:00.563869    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:00.578883    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:00.578896    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:00.591042    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:00.591054    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:00.602344    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:00.602354    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:00.626608    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:00.626618    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:00.641141    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:00.641152    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:00.658363    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:00.658375    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:00.696807    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:00.696817    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:00.700747    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:00.700754    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:00.737943    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:00.737956    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:03.907022    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:03.907416    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:03.941727    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:03.941870    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:03.962111    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:03.962206    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:03.977413    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:03.977489    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:03.990396    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:03.990465    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:04.002646    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:04.002713    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:04.014218    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:04.014289    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:04.024682    8654 logs.go:276] 0 containers: []
	W0816 05:38:04.024696    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:04.024756    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:04.035807    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:04.035824    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:04.035833    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:04.040421    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:04.040428    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:04.063197    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:04.063206    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:04.097263    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:04.097275    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:04.108985    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:04.109000    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:04.120640    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:04.120650    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:04.132341    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:04.132354    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:04.145006    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:04.145017    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:03.250206    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:04.188648    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:04.188662    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:04.206599    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:04.206613    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:04.221787    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:04.221797    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:04.236712    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:04.236725    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:04.258966    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:04.258977    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:04.270898    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:04.270909    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:04.286892    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:04.286904    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:06.804334    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:08.251105    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:08.251373    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:08.280321    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:08.280450    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:08.298230    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:08.298327    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:08.311960    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:08.312043    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:08.325200    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:08.325280    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:08.335855    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:08.335919    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:08.346658    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:08.346726    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:08.360841    8876 logs.go:276] 0 containers: []
	W0816 05:38:08.360851    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:08.360914    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:08.371096    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:08.371112    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:08.371117    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:08.411491    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:08.411503    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:08.457531    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:08.457546    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:08.472006    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:08.472018    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:08.486245    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:08.486256    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:08.501341    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:08.501351    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:08.525720    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:08.525728    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:08.530252    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:08.530259    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:08.548715    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:08.548728    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:08.565089    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:08.565099    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:08.576874    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:08.576886    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:08.588520    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:08.588531    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:08.600042    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:08.600052    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:08.637886    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:08.637902    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:08.652286    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:08.652299    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:08.663525    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:08.663536    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:08.675854    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:08.675870    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:11.189009    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:11.806728    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:11.806901    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:11.830373    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:11.830477    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:11.845651    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:11.845719    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:11.858302    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:11.858371    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:11.869426    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:11.869490    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:11.880396    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:11.880468    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:11.890678    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:11.890741    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:11.901061    8654 logs.go:276] 0 containers: []
	W0816 05:38:11.901074    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:11.901138    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:11.914354    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:11.914370    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:11.914376    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:11.919341    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:11.919347    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:11.931197    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:11.931208    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:11.943087    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:11.943100    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:11.961747    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:11.961761    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:11.973623    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:11.973634    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:12.011266    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:12.011281    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:12.047193    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:12.047203    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:12.059085    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:12.059094    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:12.077247    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:12.077256    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:12.091759    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:12.091773    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:12.105664    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:12.105677    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:12.117148    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:12.117158    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:12.142387    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:12.142396    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:12.154816    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:12.154830    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:16.191427    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:16.191894    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:16.235123    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:16.235283    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:16.256143    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:16.256248    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:16.270580    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:16.270660    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:16.284597    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:16.284680    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:16.295196    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:16.295271    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:16.310405    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:16.310476    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:16.321949    8876 logs.go:276] 0 containers: []
	W0816 05:38:16.321964    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:16.322030    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:16.332836    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:16.332853    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:16.332859    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:16.351795    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:16.351806    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:16.363727    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:16.363738    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:16.387429    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:16.387439    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:16.426041    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:16.426051    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:16.440292    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:16.440303    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:16.455449    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:16.455462    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:16.468381    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:16.468392    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:16.506160    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:16.506171    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:16.517481    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:16.517493    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:16.528974    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:16.528984    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:16.546132    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:16.546144    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:16.558697    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:16.558710    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:16.572864    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:16.572875    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:16.613364    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:16.613378    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:16.627042    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:16.627055    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:16.644412    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:16.644426    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:14.671819    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:19.150445    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:19.674056    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:19.674179    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:19.688219    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:19.688299    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:19.699768    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:19.699847    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:19.710129    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:19.710197    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:19.721017    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:19.721088    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:19.731451    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:19.731520    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:19.744794    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:19.744869    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:19.755311    8654 logs.go:276] 0 containers: []
	W0816 05:38:19.755321    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:19.755380    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:19.765834    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:19.765852    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:19.765856    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:19.784061    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:19.784074    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:19.795780    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:19.795791    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:19.820743    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:19.820754    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:19.832849    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:19.832859    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:19.866856    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:19.866865    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:19.878817    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:19.878829    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:19.890632    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:19.890642    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:19.902416    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:19.902428    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:19.939064    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:19.939079    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:19.953738    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:19.953748    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:19.965453    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:19.965464    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:19.969975    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:19.969984    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:19.986849    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:19.986861    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:20.004283    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:20.004294    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:22.518693    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:24.152710    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:24.152913    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:24.172474    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:24.172570    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:24.187985    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:24.188068    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:24.200485    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:24.200562    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:24.210885    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:24.210952    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:24.221589    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:24.221658    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:24.232752    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:24.232826    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:24.243070    8876 logs.go:276] 0 containers: []
	W0816 05:38:24.243081    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:24.243142    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:24.253398    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:24.253415    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:24.253420    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:24.268046    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:24.268056    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:24.280294    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:24.280305    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:24.284673    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:24.284679    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:24.298519    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:24.298533    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:24.340237    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:24.340249    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:24.350884    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:24.350896    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:24.373072    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:24.373082    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:24.407916    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:24.407931    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:24.426982    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:24.426996    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:24.438789    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:24.438801    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:24.453595    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:24.453606    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:24.465237    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:24.465247    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:24.478722    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:24.478735    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:24.497291    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:24.497304    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:24.511269    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:24.511280    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:24.523081    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:24.523095    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:27.061442    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:27.520821    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:27.520935    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:27.532296    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:27.532372    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:27.542822    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:27.542889    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:27.553830    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:27.553911    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:27.564282    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:27.564351    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:27.574960    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:27.575025    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:27.586150    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:27.586225    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:27.596303    8654 logs.go:276] 0 containers: []
	W0816 05:38:27.596314    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:27.596376    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:27.608504    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:27.608519    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:27.608525    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:27.644393    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:27.644409    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:27.659202    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:27.659222    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:27.672880    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:27.672895    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:27.689911    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:27.689925    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:27.702324    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:27.702335    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:27.714808    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:27.714819    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:27.732338    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:27.732349    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:27.744941    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:27.744959    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:27.749284    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:27.749290    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:27.785429    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:27.785440    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:27.797307    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:27.797316    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:27.821972    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:27.821983    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:27.837516    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:27.837524    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:27.848988    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:27.848998    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:32.061963    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:32.062233    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:32.087970    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:32.088095    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:32.104648    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:32.104739    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:32.117991    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:32.118071    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:32.129767    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:32.129847    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:32.141621    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:32.141690    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:32.152215    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:32.152287    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:32.162657    8876 logs.go:276] 0 containers: []
	W0816 05:38:32.162669    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:32.162734    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:32.173257    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:32.173274    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:32.173281    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:32.187422    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:32.187432    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:32.204777    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:32.204789    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:32.216090    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:32.216104    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:32.229942    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:32.229951    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:32.243960    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:32.243970    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:32.259521    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:32.259533    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:32.272445    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:32.272456    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:32.285075    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:32.285089    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:32.319670    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:32.319684    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:32.331660    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:32.331673    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:32.344999    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:32.345009    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:32.367505    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:32.367516    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:32.378405    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:32.378417    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:32.382473    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:32.382482    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:32.420836    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:32.420848    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:32.439734    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:32.439747    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:30.362921    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:34.980610    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:35.365307    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:35.365494    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:35.385831    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:35.385919    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:35.399739    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:35.399810    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:35.415865    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:35.415940    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:35.427115    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:35.427190    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:35.437858    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:35.437934    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:35.448903    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:35.448976    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:35.460064    8654 logs.go:276] 0 containers: []
	W0816 05:38:35.460075    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:35.460135    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:35.470363    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:35.470380    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:35.470385    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:35.496167    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:35.496179    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:35.508782    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:35.508792    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:35.545176    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:35.545190    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:35.563706    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:35.563720    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:35.577158    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:35.577171    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:35.591139    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:35.591152    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:35.602763    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:35.602776    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:35.615722    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:35.615733    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:35.627422    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:35.627435    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:35.664300    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:35.664309    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:35.668501    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:35.668508    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:35.683340    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:35.683352    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:35.695547    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:35.695561    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:35.710494    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:35.710509    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:38.225248    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:39.983090    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:39.983220    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:39.994626    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:39.994709    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:40.005604    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:40.005681    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:40.016941    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:40.017015    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:40.027413    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:40.027494    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:40.038201    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:40.038265    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:40.048434    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:40.048507    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:40.059056    8876 logs.go:276] 0 containers: []
	W0816 05:38:40.059069    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:40.059131    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:40.069480    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:40.069498    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:40.069503    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:40.083818    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:40.083831    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:40.099114    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:40.099124    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:40.116270    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:40.116282    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:40.131266    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:40.131279    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:40.143226    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:40.143239    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:40.155137    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:40.155151    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:40.176606    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:40.176616    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:40.180914    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:40.180923    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:40.216998    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:40.217010    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:40.254764    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:40.254775    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:40.269362    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:40.269376    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:40.281061    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:40.281072    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:40.294646    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:40.294662    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:40.306530    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:40.306541    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:40.344038    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:40.344047    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:40.357900    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:40.357913    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:42.871300    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:43.227870    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:43.228360    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:43.267090    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:43.267228    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:43.287479    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:43.287574    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:43.306565    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:43.306645    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:43.317630    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:43.317709    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:43.328634    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:43.328706    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:43.339082    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:43.339163    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:43.349802    8654 logs.go:276] 0 containers: []
	W0816 05:38:43.349813    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:43.349876    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:43.359918    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:43.359933    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:43.359939    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:43.371592    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:43.371601    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:43.411388    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:43.411398    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:43.426387    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:43.426398    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:43.445999    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:43.446011    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:43.464154    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:43.464168    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:43.487636    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:43.487648    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:43.499248    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:43.499259    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:43.516252    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:43.516266    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:43.531174    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:43.531184    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:43.553907    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:43.553917    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:43.591224    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:43.591234    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:43.603311    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:43.603321    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:43.607626    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:43.607635    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:43.619205    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:43.619218    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:47.874088    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:47.874277    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:47.893724    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:47.893817    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:47.906901    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:47.906978    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:47.918837    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:47.918902    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:47.929442    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:47.929517    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:47.939539    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:47.939610    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:47.950792    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:47.950867    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:47.961402    8876 logs.go:276] 0 containers: []
	W0816 05:38:47.961414    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:47.961472    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:47.971652    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:47.971668    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:47.971673    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:48.010559    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:48.010570    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:48.025255    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:48.025271    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:48.039777    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:48.039787    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:48.078213    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:48.078220    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:48.091811    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:48.091824    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:48.103312    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:48.103322    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:48.137459    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:48.137471    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:48.154848    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:48.154859    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:46.133664    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:48.166141    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:48.166151    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:48.183374    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:48.183386    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:48.201326    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:48.201338    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:48.214734    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:48.214747    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:48.225928    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:48.225943    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:48.249274    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:48.249288    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:48.253802    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:48.253811    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:48.266180    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:48.266192    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:50.785997    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:51.135352    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:51.135610    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:51.165158    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:51.165262    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:51.180726    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:51.180806    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:51.195733    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:51.195811    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:51.208147    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:51.208220    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:51.218344    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:51.218413    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:51.229920    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:51.229994    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:51.240273    8654 logs.go:276] 0 containers: []
	W0816 05:38:51.240284    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:51.240345    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:51.251290    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:51.251309    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:51.251315    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:51.255695    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:51.255705    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:51.291280    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:51.291294    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:51.305547    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:51.305558    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:51.316908    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:51.316919    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:51.352744    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:51.352753    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:51.367826    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:51.367841    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:51.379403    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:51.379413    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:51.390327    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:51.390340    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:51.404515    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:51.404526    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:51.416162    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:51.416173    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:51.428550    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:51.428563    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:51.440862    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:51.440872    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:51.460251    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:51.460265    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:51.478501    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:51.478511    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:54.004699    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:55.788250    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:55.788310    8876 kubeadm.go:597] duration metric: took 4m4.162423416s to restartPrimaryControlPlane
	W0816 05:38:55.788368    8876 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 05:38:55.788390    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0816 05:38:56.797072    8876 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.008688292s)
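
Editor's note: at this point the 8876 run gives up on restarting the existing control plane (4m4.16s without a healthy /healthz, per the duration metric above) and falls back to wiping it with "kubeadm reset" before a fresh "kubeadm init". The control flow, roughly (function names here are illustrative, not minikube's):

package main

import (
	"errors"
	"fmt"
)

// restartOrReset sketches the fallback visible in the log: try to bring the
// old control plane back; if that fails, wipe it and re-init. The three
// closures stand in for minikube's kubeadm invocations.
func restartOrReset(restart, reset, initCluster func() error) error {
	if err := restart(); err == nil {
		return nil // control plane recovered; no reset needed
	}
	// corresponds to "! Unable to restart control-plane node(s), will reset cluster"
	if err := reset(); err != nil {
		return err
	}
	return initCluster()
}

func main() {
	err := restartOrReset(
		func() error { return errors.New("healthz never answered") },
		func() error { fmt.Println("kubeadm reset --force"); return nil },
		func() error { fmt.Println("kubeadm init --config kubeadm.yaml"); return nil },
	)
	fmt.Println("result:", err)
}
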
	I0816 05:38:56.797153    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:38:56.802158    8876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 05:38:56.804824    8876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 05:38:56.807474    8876 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 05:38:56.807480    8876 kubeadm.go:157] found existing configuration files:
	
	I0816 05:38:56.807500    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/admin.conf
	I0816 05:38:56.809893    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 05:38:56.809915    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 05:38:56.812745    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/kubelet.conf
	I0816 05:38:56.816002    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 05:38:56.816038    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 05:38:56.818796    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/controller-manager.conf
	I0816 05:38:56.821322    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 05:38:56.821342    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 05:38:56.824461    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/scheduler.conf
	I0816 05:38:56.827563    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 05:38:56.827585    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
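
Editor's note: the grep/rm pairs above implement a stale-config sweep over the four kubeconfigs under /etc/kubernetes: keep a file only if it already points at the expected control-plane endpoint, otherwise remove it so the kubeadm init that follows regenerates it. In this run all four greps exit 2 because the files are simply absent after the reset. A sketch of the same sweep, run locally rather than over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

// sweepStaleConfigs removes kubeconfigs that do not reference the expected
// endpoint; missing files (as in the log) are skipped and left for kubeadm
// to write fresh.
func sweepStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // file absent: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale", p)
			os.Remove(p)
		}
	}
}

func main() {
	sweepStaleConfigs("https://control-plane.minikube.internal:51397", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
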
	I0816 05:38:56.830330    8876 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 05:38:56.847746    8876 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0816 05:38:56.847775    8876 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 05:38:56.896700    8876 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 05:38:56.896786    8876 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 05:38:56.896858    8876 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 05:38:56.945877    8876 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 05:38:56.950081    8876 out.go:235]   - Generating certificates and keys ...
	I0816 05:38:56.950181    8876 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 05:38:56.950304    8876 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 05:38:56.950344    8876 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 05:38:56.950375    8876 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 05:38:56.950409    8876 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 05:38:56.950436    8876 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 05:38:56.950530    8876 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 05:38:56.950594    8876 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 05:38:56.950684    8876 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 05:38:56.950725    8876 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 05:38:56.950747    8876 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 05:38:56.950774    8876 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 05:38:57.006726    8876 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 05:38:57.046099    8876 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 05:38:57.194402    8876 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 05:38:57.297786    8876 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 05:38:57.325562    8876 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 05:38:57.325989    8876 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 05:38:57.326080    8876 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 05:38:57.409045    8876 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 05:38:57.413003    8876 out.go:235]   - Booting up control plane ...
	I0816 05:38:57.413051    8876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 05:38:57.413094    8876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 05:38:57.413131    8876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 05:38:57.413174    8876 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 05:38:57.413342    8876 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 05:38:59.006818    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:59.006922    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:59.019074    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:38:59.019153    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:59.030582    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:38:59.030658    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:59.043939    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:38:59.044025    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:59.054962    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:38:59.055030    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:59.065556    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:38:59.065631    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:59.076488    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:38:59.076556    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:59.086865    8654 logs.go:276] 0 containers: []
	W0816 05:38:59.086877    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:59.086943    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:59.097555    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:38:59.097574    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:38:59.097580    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:38:59.109999    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:38:59.110011    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:38:59.126040    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:38:59.126051    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:38:59.145342    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:59.145353    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:39:01.915691    8876 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501938 seconds
	I0816 05:39:01.915780    8876 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 05:39:01.920954    8876 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 05:39:02.429208    8876 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 05:39:02.429319    8876 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-972000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 05:39:02.932912    8876 kubeadm.go:310] [bootstrap-token] Using token: nyvah2.rc2dbnw87lmpdpnb
	I0816 05:39:02.936361    8876 out.go:235]   - Configuring RBAC rules ...
	I0816 05:39:02.936460    8876 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 05:39:02.936512    8876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 05:39:02.954204    8876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 05:39:02.955040    8876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 05:39:02.955876    8876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 05:39:02.956640    8876 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 05:39:02.959925    8876 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 05:39:03.145409    8876 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 05:39:03.337473    8876 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 05:39:03.338436    8876 kubeadm.go:310] 
	I0816 05:39:03.338469    8876 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 05:39:03.338476    8876 kubeadm.go:310] 
	I0816 05:39:03.338519    8876 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 05:39:03.338522    8876 kubeadm.go:310] 
	I0816 05:39:03.338534    8876 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 05:39:03.338638    8876 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 05:39:03.338663    8876 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 05:39:03.338672    8876 kubeadm.go:310] 
	I0816 05:39:03.338704    8876 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 05:39:03.338711    8876 kubeadm.go:310] 
	I0816 05:39:03.338768    8876 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 05:39:03.338774    8876 kubeadm.go:310] 
	I0816 05:39:03.338833    8876 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 05:39:03.338910    8876 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 05:39:03.338977    8876 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 05:39:03.339040    8876 kubeadm.go:310] 
	I0816 05:39:03.339079    8876 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 05:39:03.339163    8876 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 05:39:03.339168    8876 kubeadm.go:310] 
	I0816 05:39:03.339219    8876 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nyvah2.rc2dbnw87lmpdpnb \
	I0816 05:39:03.339313    8876 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23cf10825d548a004e2d3ef8e1c65218486081db837b36803636fece4fac457f \
	I0816 05:39:03.339327    8876 kubeadm.go:310] 	--control-plane 
	I0816 05:39:03.339330    8876 kubeadm.go:310] 
	I0816 05:39:03.339472    8876 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 05:39:03.339480    8876 kubeadm.go:310] 
	I0816 05:39:03.339532    8876 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nyvah2.rc2dbnw87lmpdpnb \
	I0816 05:39:03.339591    8876 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23cf10825d548a004e2d3ef8e1c65218486081db837b36803636fece4fac457f 
	I0816 05:39:03.339649    8876 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 05:39:03.339666    8876 cni.go:84] Creating CNI manager for ""
	I0816 05:39:03.339675    8876 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:39:03.342807    8876 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 05:39:03.348834    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 05:39:03.352877    8876 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
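
Editor's note: the 496-byte /etc/cni/net.d/1-k8s.conflist scp'd above is the bridge CNI config announced two lines earlier; the log does not show its contents. A representative bridge conflist of the kind written here (field values are typical defaults, not necessarily the exact bytes), embedded in a small Go writer for illustration:

package main

import (
	"fmt"
	"os"
)

// conflist is a representative bridge CNI config; the actual payload
// written by the test is not visible in the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Println("write failed (needs root):", err)
	}
}
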
	I0816 05:39:03.357761    8876 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 05:39:03.357813    8876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 05:39:03.357829    8876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-972000 minikube.k8s.io/updated_at=2024_08_16T05_39_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=stopped-upgrade-972000 minikube.k8s.io/primary=true
	I0816 05:39:03.405808    8876 kubeadm.go:1113] duration metric: took 48.036584ms to wait for elevateKubeSystemPrivileges
	I0816 05:39:03.405838    8876 ops.go:34] apiserver oom_adj: -16
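
Editor's note: the "ops.go:34] apiserver oom_adj: -16" line is the result of the earlier "cat /proc/$(pgrep kube-apiserver)/oom_adj": the test records how strongly the kernel OOM killer is biased away from the apiserver (negative values make it a less likely victim). Reading the same value in Go, as a sketch that assumes a single matching process:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj returns the oom_adj score of the running kube-apiserver,
// mirroring the /proc read in the log.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}
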
	I0816 05:39:03.405845    8876 kubeadm.go:394] duration metric: took 4m11.794322208s to StartCluster
	I0816 05:39:03.405855    8876 settings.go:142] acquiring lock: {Name:mkec9dae897ed6cd1355cb2ba10161c54c163fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:39:03.405948    8876 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:39:03.406353    8876 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/kubeconfig: {Name:mka7b2a1dac03f0ea4ac28563b4fe884a2b1b206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:39:03.406551    8876 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:39:03.406594    8876 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 05:39:03.406641    8876 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-972000"
	I0816 05:39:03.406658    8876 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-972000"
	W0816 05:39:03.406661    8876 addons.go:243] addon storage-provisioner should already be in state true
	I0816 05:39:03.406672    8876 host.go:66] Checking if "stopped-upgrade-972000" exists ...
	I0816 05:39:03.406675    8876 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-972000"
	I0816 05:39:03.406697    8876 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:39:03.406743    8876 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-972000"
	I0816 05:39:03.407824    8876 kapi.go:59] client config for stopped-upgrade-972000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101e55610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 05:39:03.407939    8876 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-972000"
	W0816 05:39:03.407943    8876 addons.go:243] addon default-storageclass should already be in state true
	I0816 05:39:03.407949    8876 host.go:66] Checking if "stopped-upgrade-972000" exists ...
	I0816 05:39:03.410809    8876 out.go:177] * Verifying Kubernetes components...
	I0816 05:39:03.411160    8876 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 05:39:03.414987    8876 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 05:39:03.414993    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	I0816 05:39:03.418792    8876 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:38:59.175248    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:38:59.175263    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:38:59.188519    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:38:59.188530    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:38:59.201231    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:38:59.201243    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:38:59.214156    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:38:59.214167    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:38:59.226234    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:59.226245    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:59.265713    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:59.265735    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:59.271044    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:38:59.271061    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:38:59.286063    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:38:59.286075    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:38:59.301364    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:38:59.301379    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:59.314005    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:38:59.314018    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:38:59.327274    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:59.327285    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:39:01.868988    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:03.422855    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:39:03.424199    8876 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 05:39:03.424205    8876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 05:39:03.424211    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	I0816 05:39:03.502784    8876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:39:03.507594    8876 api_server.go:52] waiting for apiserver process to appear ...
	I0816 05:39:03.507639    8876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:39:03.511209    8876 api_server.go:72] duration metric: took 104.647709ms to wait for apiserver process to appear ...
	I0816 05:39:03.511218    8876 api_server.go:88] waiting for apiserver healthz status ...
	I0816 05:39:03.511226    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
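
Editor's note: after the re-init, readiness is re-checked in two stages visible above: first wait for a kube-apiserver process to appear (the "sudo pgrep -xnf kube-apiserver.*minikube.*" at 05:39:03.507, done in ~105ms), then resume the /healthz poll. A sketch of the process-wait stage (interval and deadline are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline
// passes, mirroring api_server.go's "waiting for apiserver process to appear".
func waitForProcess(pattern string, deadline time.Time) error {
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exit 0: a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q before deadline", pattern)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", time.Now().Add(2*time.Minute))
	fmt.Println(err)
}
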
	I0816 05:39:03.549327    8876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 05:39:03.565049    8876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 05:39:03.930805    8876 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 05:39:03.930819    8876 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 05:39:06.871186    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:06.871365    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:39:06.884114    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:39:06.884199    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:39:06.894964    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:39:06.895037    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:39:06.905937    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:39:06.906009    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:39:06.919201    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:39:06.919273    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:39:06.929786    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:39:06.929859    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:39:06.940768    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:39:06.940836    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:39:06.950783    8654 logs.go:276] 0 containers: []
	W0816 05:39:06.950798    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:39:06.950852    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:39:06.961624    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:39:06.961647    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:39:06.961653    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:39:06.973188    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:39:06.973202    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:39:07.008316    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:39:07.008327    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:39:07.022239    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:39:07.022250    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:39:07.033833    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:39:07.033844    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:39:07.050594    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:39:07.050607    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:39:07.075156    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:39:07.075166    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:39:07.111737    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:39:07.111746    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:39:07.123705    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:39:07.123715    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:39:07.135489    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:39:07.135501    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:39:07.149954    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:39:07.149969    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:39:07.161817    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:39:07.161828    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:39:07.174176    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:39:07.174187    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:39:07.178795    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:39:07.178804    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:39:07.193993    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:39:07.194003    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:39:08.513333    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:08.513369    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:09.708049    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:13.513675    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:13.513713    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:14.710249    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:14.710342    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:39:14.721814    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:39:14.721892    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:39:14.734176    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:39:14.734250    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:39:14.745865    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:39:14.745937    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:39:14.756263    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:39:14.756334    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:39:14.767335    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:39:14.767409    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:39:14.777852    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:39:14.777917    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:39:14.795589    8654 logs.go:276] 0 containers: []
	W0816 05:39:14.795603    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:39:14.795665    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:39:14.807533    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:39:14.807551    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:39:14.807556    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:39:14.824316    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:39:14.824327    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:39:14.836583    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:39:14.836593    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:39:14.875267    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:39:14.875279    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:39:14.887204    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:39:14.887216    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:39:14.899626    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:39:14.899636    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:39:14.917601    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:39:14.917611    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:39:14.933919    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:39:14.933928    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:39:14.948327    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:39:14.948338    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:39:14.960530    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:39:14.960541    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:39:14.974462    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:39:14.974476    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:39:14.985850    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:39:14.985861    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:39:14.997691    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:39:14.997701    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:39:15.022537    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:39:15.022547    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:39:15.058988    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:39:15.059001    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:39:17.565474    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:18.514145    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:18.514205    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:22.567627    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:22.567831    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:39:22.586809    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:39:22.586889    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:39:22.601562    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:39:22.601643    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:39:22.616661    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:39:22.616742    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:39:22.627735    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:39:22.627808    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:39:22.638205    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:39:22.638271    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:39:22.648526    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:39:22.648591    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:39:22.658701    8654 logs.go:276] 0 containers: []
	W0816 05:39:22.658716    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:39:22.658777    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:39:22.669375    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:39:22.669394    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:39:22.669400    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:39:22.683716    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:39:22.683726    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:39:22.697256    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:39:22.697267    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:39:22.732298    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:39:22.732310    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:39:22.743888    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:39:22.743901    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:39:22.755725    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:39:22.755737    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:39:22.773823    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:39:22.773834    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:39:22.789819    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:39:22.789830    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:39:22.804463    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:39:22.804474    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:39:22.816357    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:39:22.816367    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:39:22.840804    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:39:22.840814    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:39:22.878334    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:39:22.878344    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:39:22.882783    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:39:22.882789    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:39:22.894411    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:39:22.894421    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:39:22.906221    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:39:22.906231    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:39:23.514612    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:23.514655    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:25.425833    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:28.515310    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:28.515355    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:33.516154    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:33.516193    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0816 05:39:33.932710    8876 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0816 05:39:33.937608    8876 out.go:177] * Enabled addons: storage-provisioner
	I0816 05:39:30.428374    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:30.428569    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:39:30.448292    8654 logs.go:276] 1 containers: [7e7027a018f3]
	I0816 05:39:30.448381    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:39:30.462191    8654 logs.go:276] 1 containers: [0f8987cebd88]
	I0816 05:39:30.462272    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:39:30.476088    8654 logs.go:276] 4 containers: [d08c19c2b1cc 4f5615c53c6f e87bc196aca8 fbb13a6d2faf]
	I0816 05:39:30.476160    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:39:30.487003    8654 logs.go:276] 1 containers: [927f9bdc4d05]
	I0816 05:39:30.487079    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:39:30.505229    8654 logs.go:276] 1 containers: [9d07cdf1cffb]
	I0816 05:39:30.505295    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:39:30.515572    8654 logs.go:276] 1 containers: [8af46eabd188]
	I0816 05:39:30.515641    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:39:30.525999    8654 logs.go:276] 0 containers: []
	W0816 05:39:30.526014    8654 logs.go:278] No container was found matching "kindnet"
	I0816 05:39:30.526073    8654 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:39:30.546808    8654 logs.go:276] 1 containers: [af1a471fe36f]
	I0816 05:39:30.546823    8654 logs.go:123] Gathering logs for coredns [fbb13a6d2faf] ...
	I0816 05:39:30.546828    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb13a6d2faf"
	I0816 05:39:30.558636    8654 logs.go:123] Gathering logs for kube-controller-manager [8af46eabd188] ...
	I0816 05:39:30.558648    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af46eabd188"
	I0816 05:39:30.576207    8654 logs.go:123] Gathering logs for Docker ...
	I0816 05:39:30.576217    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:39:30.600338    8654 logs.go:123] Gathering logs for dmesg ...
	I0816 05:39:30.600347    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:39:30.605177    8654 logs.go:123] Gathering logs for kube-apiserver [7e7027a018f3] ...
	I0816 05:39:30.605186    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7027a018f3"
	I0816 05:39:30.619305    8654 logs.go:123] Gathering logs for coredns [d08c19c2b1cc] ...
	I0816 05:39:30.619318    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08c19c2b1cc"
	I0816 05:39:30.642000    8654 logs.go:123] Gathering logs for kube-scheduler [927f9bdc4d05] ...
	I0816 05:39:30.642009    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927f9bdc4d05"
	I0816 05:39:30.656881    8654 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:39:30.656892    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:39:30.691844    8654 logs.go:123] Gathering logs for etcd [0f8987cebd88] ...
	I0816 05:39:30.691854    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8987cebd88"
	I0816 05:39:30.705664    8654 logs.go:123] Gathering logs for coredns [e87bc196aca8] ...
	I0816 05:39:30.705675    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87bc196aca8"
	I0816 05:39:30.723331    8654 logs.go:123] Gathering logs for storage-provisioner [af1a471fe36f] ...
	I0816 05:39:30.723342    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af1a471fe36f"
	I0816 05:39:30.735096    8654 logs.go:123] Gathering logs for container status ...
	I0816 05:39:30.735105    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:39:30.746780    8654 logs.go:123] Gathering logs for kubelet ...
	I0816 05:39:30.746792    8654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:39:30.784006    8654 logs.go:123] Gathering logs for coredns [4f5615c53c6f] ...
	I0816 05:39:30.784018    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5615c53c6f"
	I0816 05:39:30.795172    8654 logs.go:123] Gathering logs for kube-proxy [9d07cdf1cffb] ...
	I0816 05:39:30.795185    8654 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d07cdf1cffb"
	I0816 05:39:33.309471    8654 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:33.946479    8876 addons.go:510] duration metric: took 30.540411458s for enable addons: enabled=[storage-provisioner]
	I0816 05:39:38.311616    8654 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:38.316405    8654 out.go:201] 
	W0816 05:39:38.317983    8654 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0816 05:39:38.317988    8654 out.go:270] * 
	W0816 05:39:38.318413    8654 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:39:38.328334    8654 out.go:201] 
	I0816 05:39:38.516776    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:38.516859    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:43.517959    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:43.517990    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:48.519933    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:48.519967    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
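	The two interleaved processes (pids 8654 and 8876) are stuck in the same loop: issue GET https://10.0.2.15:8443/healthz, time out after about 5 seconds, and repeat until the 6m0s node wait expires with GUEST_START. The Go sketch below reproduces that poll, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and simplifying away minikube's real backoff and status-code handling; the URL and the 6m0s wait are taken from the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// Poll /healthz until it returns 200 or the deadline passes, mirroring the
	// "Checking apiserver healthz" / "stopped: ... context deadline exceeded"
	// pairs above. Sketch only; minikube's api_server.go does more than this.
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s spacing of the checks above
			Transport: &http.Transport{
				// Assumption: the apiserver cert inside the guest is self-signed.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // "wait 6m0s for node"
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err)
				time.Sleep(time.Second)
				continue
			}
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthy")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
	}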
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-08-16 12:30:38 UTC, ends at Fri 2024-08-16 12:39:54 UTC. --
	Aug 16 12:39:38 running-upgrade-607000 dockerd[3222]: time="2024-08-16T12:39:38.476605399Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/47629bbc924e2bc32e634ea16b62188864c7dc5d00278f687878619800ee1e34 pid=19141 runtime=io.containerd.runc.v2
	Aug 16 12:39:38 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:38Z" level=error msg="ContainerStats resp: {0x400083b540 linux}"
	Aug 16 12:39:38 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:38Z" level=error msg="ContainerStats resp: {0x40005c7180 linux}"
	Aug 16 12:39:39 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:39Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 12:39:39 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:39Z" level=error msg="ContainerStats resp: {0x40008b3a80 linux}"
	Aug 16 12:39:40 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:40Z" level=error msg="ContainerStats resp: {0x400059c180 linux}"
	Aug 16 12:39:40 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:40Z" level=error msg="ContainerStats resp: {0x400059c440 linux}"
	Aug 16 12:39:40 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:40Z" level=error msg="ContainerStats resp: {0x40009b4880 linux}"
	Aug 16 12:39:40 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:40Z" level=error msg="ContainerStats resp: {0x400059cd40 linux}"
	Aug 16 12:39:40 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:40Z" level=error msg="ContainerStats resp: {0x400059cf00 linux}"
	Aug 16 12:39:40 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:40Z" level=error msg="ContainerStats resp: {0x400093a040 linux}"
	Aug 16 12:39:40 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:40Z" level=error msg="ContainerStats resp: {0x400093a640 linux}"
	Aug 16 12:39:44 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:44Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 12:39:49 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:49Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 12:39:50 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:50Z" level=error msg="ContainerStats resp: {0x400041e8c0 linux}"
	Aug 16 12:39:50 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:50Z" level=error msg="ContainerStats resp: {0x400041fd00 linux}"
	Aug 16 12:39:51 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:51Z" level=error msg="ContainerStats resp: {0x40008b38c0 linux}"
	Aug 16 12:39:52 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:52Z" level=error msg="ContainerStats resp: {0x40009b42c0 linux}"
	Aug 16 12:39:52 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:52Z" level=error msg="ContainerStats resp: {0x40009b46c0 linux}"
	Aug 16 12:39:52 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:52Z" level=error msg="ContainerStats resp: {0x400059d000 linux}"
	Aug 16 12:39:52 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:52Z" level=error msg="ContainerStats resp: {0x40009b4fc0 linux}"
	Aug 16 12:39:52 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:52Z" level=error msg="ContainerStats resp: {0x400059d8c0 linux}"
	Aug 16 12:39:52 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:52Z" level=error msg="ContainerStats resp: {0x400059de80 linux}"
	Aug 16 12:39:52 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:52Z" level=error msg="ContainerStats resp: {0x400093a200 linux}"
	Aug 16 12:39:54 running-upgrade-607000 cri-dockerd[3062]: time="2024-08-16T12:39:54Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	47629bbc924e2       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   b32bafe96297a
	1d304cb4caec8       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   7247c06fb8054
	d08c19c2b1cc5       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   7247c06fb8054
	4f5615c53c6fb       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b32bafe96297a
	9d07cdf1cffb5       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   076ba2ce9675c
	af1a471fe36f1       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   22d06f9ebc3b9
	927f9bdc4d059       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   38090fb1363ee
	0f8987cebd88e       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   38095cea34f2f
	7e7027a018f38       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   ec350b2a90d46
	8af46eabd1880       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   8b274a37e22be
	
	
	==> coredns [1d304cb4caec] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6647663743933886692.2824375236703881142. HINFO: read udp 10.244.0.3:56312->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6647663743933886692.2824375236703881142. HINFO: read udp 10.244.0.3:54563->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6647663743933886692.2824375236703881142. HINFO: read udp 10.244.0.3:50091->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6647663743933886692.2824375236703881142. HINFO: read udp 10.244.0.3:41168->10.0.2.3:53: i/o timeout
	
	
	==> coredns [47629bbc924e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1001106554642012507.822546656621259366. HINFO: read udp 10.244.0.2:40434->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1001106554642012507.822546656621259366. HINFO: read udp 10.244.0.2:49020->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1001106554642012507.822546656621259366. HINFO: read udp 10.244.0.2:42688->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1001106554642012507.822546656621259366. HINFO: read udp 10.244.0.2:44350->10.0.2.3:53: i/o timeout
	
	
	==> coredns [4f5615c53c6f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 300183563015251711.725050452404911806. HINFO: read udp 10.244.0.2:47395->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 300183563015251711.725050452404911806. HINFO: read udp 10.244.0.2:55232->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 300183563015251711.725050452404911806. HINFO: read udp 10.244.0.2:49016->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 300183563015251711.725050452404911806. HINFO: read udp 10.244.0.2:48396->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 300183563015251711.725050452404911806. HINFO: read udp 10.244.0.2:49091->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 300183563015251711.725050452404911806. HINFO: read udp 10.244.0.2:58805->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 300183563015251711.725050452404911806. HINFO: read udp 10.244.0.2:51515->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 300183563015251711.725050452404911806. HINFO: read udp 10.244.0.2:35485->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 300183563015251711.725050452404911806. HINFO: read udp 10.244.0.2:59231->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 300183563015251711.725050452404911806. HINFO: read udp 10.244.0.2:44434->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d08c19c2b1cc] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5832259562347923126.4164958317637630919. HINFO: read udp 10.244.0.3:44030->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5832259562347923126.4164958317637630919. HINFO: read udp 10.244.0.3:44536->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5832259562347923126.4164958317637630919. HINFO: read udp 10.244.0.3:39615->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5832259562347923126.4164958317637630919. HINFO: read udp 10.244.0.3:56250->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5832259562347923126.4164958317637630919. HINFO: read udp 10.244.0.3:40278->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5832259562347923126.4164958317637630919. HINFO: read udp 10.244.0.3:44061->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5832259562347923126.4164958317637630919. HINFO: read udp 10.244.0.3:48213->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5832259562347923126.4164958317637630919. HINFO: read udp 10.244.0.3:45251->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5832259562347923126.4164958317637630919. HINFO: read udp 10.244.0.3:52074->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5832259562347923126.4164958317637630919. HINFO: read udp 10.244.0.3:47586->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
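	All four coredns instances report the same symptom: their HINFO probe queries to the upstream resolver at 10.0.2.3:53 time out. The sketch below reproduces just that upstream lookup from inside the guest, assuming 10.0.2.3:53 is the QEMU user-mode (slirp) DNS forwarder, as the log addresses suggest; an i/o timeout here would confirm the guest cannot reach its upstream DNS.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	// Send a lookup straight to the assumed slirp resolver at 10.0.2.3:53.
	// Sketch only; CoreDNS itself sends HINFO probes from its health machinery,
	// and the probe domain here is an arbitrary stand-in.
	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.io")
		if err != nil {
			fmt.Println("upstream DNS unreachable:", err) // expect an i/o timeout
			return
		}
		fmt.Println("resolved:", addrs)
	}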
	
	
	==> describe nodes <==
	Name:               running-upgrade-607000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-607000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=running-upgrade-607000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T05_35_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:35:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-607000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:39:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:35:37 +0000   Fri, 16 Aug 2024 12:35:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:35:37 +0000   Fri, 16 Aug 2024 12:35:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:35:37 +0000   Fri, 16 Aug 2024 12:35:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:35:37 +0000   Fri, 16 Aug 2024 12:35:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-607000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ee4b5609f3447d8a42cf9b0f9be1e67
	  System UUID:                3ee4b5609f3447d8a42cf9b0f9be1e67
	  Boot ID:                    b7027640-6048-4652-b5f5-df3c5deb2f18
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-cbl4h                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-q75hh                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-607000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-607000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-607000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-5dvz5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-607000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-607000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-607000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-607000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-607000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-607000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-607000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-607000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-607000 event: Registered Node running-upgrade-607000 in Controller
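	Note the tension in this section: the node is Ready, every control-plane pod is Running, and the kubelet lease was renewed seconds before the report, yet the healthz polls above never succeed. That points at host-to-guest reachability of 10.0.2.15:8443 rather than in-guest health; with QEMU's builtin user-mode network (which the 10.0.2.x addresses suggest) the guest IP is typically not routable from the host. A trivial host-side check under that assumption:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Check plain TCP reachability of the apiserver endpoint from the host.
	// Under the user-mode-networking assumption above, a timeout here is the
	// expected symptom even though the guest itself is healthy.
	func main() {
		conn, err := net.DialTimeout("tcp", "10.0.2.15:8443", 3*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("tcp reachable")
	}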
	
	
	==> dmesg <==
	[  +1.706305] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.063777] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.065045] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.149605] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.065537] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.063839] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.348718] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[Aug16 12:31] systemd-fstab-generator[1839]: Ignoring "noauto" for root device
	[  +2.783419] systemd-fstab-generator[2201]: Ignoring "noauto" for root device
	[  +0.129121] systemd-fstab-generator[2235]: Ignoring "noauto" for root device
	[  +0.076557] systemd-fstab-generator[2246]: Ignoring "noauto" for root device
	[  +0.076983] systemd-fstab-generator[2259]: Ignoring "noauto" for root device
	[ +12.505249] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.210269] systemd-fstab-generator[3017]: Ignoring "noauto" for root device
	[  +0.068489] systemd-fstab-generator[3030]: Ignoring "noauto" for root device
	[  +0.060135] systemd-fstab-generator[3041]: Ignoring "noauto" for root device
	[  +0.076327] systemd-fstab-generator[3055]: Ignoring "noauto" for root device
	[  +2.368518] systemd-fstab-generator[3209]: Ignoring "noauto" for root device
	[  +3.293985] systemd-fstab-generator[3586]: Ignoring "noauto" for root device
	[  +1.927132] systemd-fstab-generator[4009]: Ignoring "noauto" for root device
	[ +19.461426] kauditd_printk_skb: 68 callbacks suppressed
	[Aug16 12:35] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.557208] systemd-fstab-generator[12254]: Ignoring "noauto" for root device
	[  +6.135888] systemd-fstab-generator[12876]: Ignoring "noauto" for root device
	[  +0.463389] systemd-fstab-generator[13009]: Ignoring "noauto" for root device
	
	
	==> etcd [0f8987cebd88] <==
	{"level":"info","ts":"2024-08-16T12:35:32.372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-16T12:35:32.372Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-16T12:35:32.382Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T12:35:32.382Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T12:35:32.382Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T12:35:32.382Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-16T12:35:32.382Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-16T12:35:33.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-16T12:35:33.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-16T12:35:33.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-16T12:35:33.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-16T12:35:33.356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-16T12:35:33.357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-16T12:35:33.357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-16T12:35:33.357Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T12:35:33.357Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T12:35:33.357Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T12:35:33.357Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T12:35:33.357Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-607000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T12:35:33.357Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T12:35:33.357Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T12:35:33.358Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-16T12:35:33.358Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T12:35:33.358Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T12:35:33.358Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:39:54 up 9 min,  0 users,  load average: 0.29, 0.38, 0.21
	Linux running-upgrade-607000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [7e7027a018f3] <==
	I0816 12:35:34.540711       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0816 12:35:34.552196       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 12:35:34.559505       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0816 12:35:34.559506       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 12:35:34.559652       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0816 12:35:34.559669       1 cache.go:39] Caches are synced for autoregister controller
	I0816 12:35:34.565999       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0816 12:35:35.290443       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 12:35:35.462564       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0816 12:35:35.465200       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0816 12:35:35.465427       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 12:35:35.587770       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 12:35:35.600229       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 12:35:35.625649       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0816 12:35:35.627628       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0816 12:35:35.628033       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 12:35:35.629256       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 12:35:36.586412       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 12:35:37.269724       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 12:35:37.277340       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0816 12:35:37.288570       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 12:35:37.342166       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 12:35:49.495256       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 12:35:50.391666       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0816 12:35:51.778076       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [8af46eabd188] <==
	I0816 12:35:49.546443       1 shared_informer.go:262] Caches are synced for node
	I0816 12:35:49.546443       1 shared_informer.go:262] Caches are synced for resource quota
	I0816 12:35:49.546454       1 range_allocator.go:173] Starting range CIDR allocator
	I0816 12:35:49.546456       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0816 12:35:49.546467       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0816 12:35:49.549052       1 range_allocator.go:374] Set node running-upgrade-607000 PodCIDR to [10.244.0.0/24]
	I0816 12:35:49.558568       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0816 12:35:49.563056       1 shared_informer.go:262] Caches are synced for stateful set
	I0816 12:35:49.590390       1 shared_informer.go:262] Caches are synced for taint
	I0816 12:35:49.590421       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0816 12:35:49.590543       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0816 12:35:49.590569       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-607000. Assuming now as a timestamp.
	I0816 12:35:49.590592       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0816 12:35:49.590611       1 event.go:294] "Event occurred" object="running-upgrade-607000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-607000 event: Registered Node running-upgrade-607000 in Controller"
	I0816 12:35:49.606830       1 shared_informer.go:262] Caches are synced for daemon sets
	I0816 12:35:49.611205       1 shared_informer.go:262] Caches are synced for GC
	I0816 12:35:49.640746       1 shared_informer.go:262] Caches are synced for TTL
	I0816 12:35:49.641423       1 shared_informer.go:262] Caches are synced for persistent volume
	I0816 12:35:49.656757       1 shared_informer.go:262] Caches are synced for attach detach
	I0816 12:35:50.066547       1 shared_informer.go:262] Caches are synced for garbage collector
	I0816 12:35:50.140004       1 shared_informer.go:262] Caches are synced for garbage collector
	I0816 12:35:50.140075       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 12:35:50.297044       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-cbl4h"
	I0816 12:35:50.300741       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-q75hh"
	I0816 12:35:50.394287       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5dvz5"
	
	
	==> kube-proxy [9d07cdf1cffb] <==
	I0816 12:35:51.763005       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0816 12:35:51.763028       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0816 12:35:51.763037       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0816 12:35:51.775750       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0816 12:35:51.775761       1 server_others.go:206] "Using iptables Proxier"
	I0816 12:35:51.775772       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0816 12:35:51.776168       1 server.go:661] "Version info" version="v1.24.1"
	I0816 12:35:51.776178       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:35:51.776403       1 config.go:317] "Starting service config controller"
	I0816 12:35:51.776416       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0816 12:35:51.776424       1 config.go:226] "Starting endpoint slice config controller"
	I0816 12:35:51.776454       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0816 12:35:51.776711       1 config.go:444] "Starting node config controller"
	I0816 12:35:51.776738       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0816 12:35:51.877574       1 shared_informer.go:262] Caches are synced for service config
	I0816 12:35:51.877628       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0816 12:35:51.877574       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [927f9bdc4d05] <==
	W0816 12:35:34.528165       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 12:35:34.528186       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0816 12:35:34.528228       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 12:35:34.528248       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0816 12:35:34.528285       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 12:35:34.528317       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0816 12:35:34.528348       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 12:35:34.528365       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0816 12:35:34.528413       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 12:35:34.528435       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0816 12:35:34.528505       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 12:35:34.528530       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 12:35:34.528548       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 12:35:34.528563       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0816 12:35:35.348009       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 12:35:35.348038       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0816 12:35:35.396349       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 12:35:35.396362       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0816 12:35:35.396417       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 12:35:35.396427       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0816 12:35:35.423236       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 12:35:35.423320       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0816 12:35:35.541832       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 12:35:35.541926       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0816 12:35:36.124567       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-08-16 12:30:38 UTC, ends at Fri 2024-08-16 12:39:54 UTC. --
	Aug 16 12:35:39 running-upgrade-607000 kubelet[12882]: E0816 12:35:39.109567   12882 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-607000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-607000"
	Aug 16 12:35:39 running-upgrade-607000 kubelet[12882]: E0816 12:35:39.309245   12882 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-607000\" already exists" pod="kube-system/etcd-running-upgrade-607000"
	Aug 16 12:35:39 running-upgrade-607000 kubelet[12882]: I0816 12:35:39.508318   12882 request.go:601] Waited for 1.105055173s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 16 12:35:39 running-upgrade-607000 kubelet[12882]: E0816 12:35:39.511543   12882 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-607000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-607000"
	Aug 16 12:35:49 running-upgrade-607000 kubelet[12882]: I0816 12:35:49.596023   12882 topology_manager.go:200] "Topology Admit Handler"
	Aug 16 12:35:49 running-upgrade-607000 kubelet[12882]: I0816 12:35:49.637059   12882 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 12:35:49 running-upgrade-607000 kubelet[12882]: I0816 12:35:49.637374   12882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rct2l\" (UniqueName: \"kubernetes.io/projected/18f32746-bff7-4d6f-8431-ec45c9221cb6-kube-api-access-rct2l\") pod \"storage-provisioner\" (UID: \"18f32746-bff7-4d6f-8431-ec45c9221cb6\") " pod="kube-system/storage-provisioner"
	Aug 16 12:35:49 running-upgrade-607000 kubelet[12882]: I0816 12:35:49.637380   12882 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 12:35:49 running-upgrade-607000 kubelet[12882]: I0816 12:35:49.637401   12882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/18f32746-bff7-4d6f-8431-ec45c9221cb6-tmp\") pod \"storage-provisioner\" (UID: \"18f32746-bff7-4d6f-8431-ec45c9221cb6\") " pod="kube-system/storage-provisioner"
	Aug 16 12:35:49 running-upgrade-607000 kubelet[12882]: E0816 12:35:49.742464   12882 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 16 12:35:49 running-upgrade-607000 kubelet[12882]: E0816 12:35:49.742488   12882 projected.go:192] Error preparing data for projected volume kube-api-access-rct2l for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 16 12:35:49 running-upgrade-607000 kubelet[12882]: E0816 12:35:49.742530   12882 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/18f32746-bff7-4d6f-8431-ec45c9221cb6-kube-api-access-rct2l podName:18f32746-bff7-4d6f-8431-ec45c9221cb6 nodeName:}" failed. No retries permitted until 2024-08-16 12:35:50.242514647 +0000 UTC m=+12.981623169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rct2l" (UniqueName: "kubernetes.io/projected/18f32746-bff7-4d6f-8431-ec45c9221cb6-kube-api-access-rct2l") pod "storage-provisioner" (UID: "18f32746-bff7-4d6f-8431-ec45c9221cb6") : configmap "kube-root-ca.crt" not found
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.300123   12882 topology_manager.go:200] "Topology Admit Handler"
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.304890   12882 topology_manager.go:200] "Topology Admit Handler"
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.397369   12882 topology_manager.go:200] "Topology Admit Handler"
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.445759   12882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc0f455d-617e-49e5-9fff-74ba22694e55-config-volume\") pod \"coredns-6d4b75cb6d-cbl4h\" (UID: \"dc0f455d-617e-49e5-9fff-74ba22694e55\") " pod="kube-system/coredns-6d4b75cb6d-cbl4h"
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.445936   12882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/95c1496e-d4a0-4e2f-87bd-a59d8900e39f-kube-proxy\") pod \"kube-proxy-5dvz5\" (UID: \"95c1496e-d4a0-4e2f-87bd-a59d8900e39f\") " pod="kube-system/kube-proxy-5dvz5"
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.445954   12882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhcjg\" (UniqueName: \"kubernetes.io/projected/5fe6d2eb-8d13-4b38-9fc1-7522ba4f7aa8-kube-api-access-nhcjg\") pod \"coredns-6d4b75cb6d-q75hh\" (UID: \"5fe6d2eb-8d13-4b38-9fc1-7522ba4f7aa8\") " pod="kube-system/coredns-6d4b75cb6d-q75hh"
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.445965   12882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95c1496e-d4a0-4e2f-87bd-a59d8900e39f-xtables-lock\") pod \"kube-proxy-5dvz5\" (UID: \"95c1496e-d4a0-4e2f-87bd-a59d8900e39f\") " pod="kube-system/kube-proxy-5dvz5"
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.445975   12882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95c1496e-d4a0-4e2f-87bd-a59d8900e39f-lib-modules\") pod \"kube-proxy-5dvz5\" (UID: \"95c1496e-d4a0-4e2f-87bd-a59d8900e39f\") " pod="kube-system/kube-proxy-5dvz5"
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.445991   12882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fe6d2eb-8d13-4b38-9fc1-7522ba4f7aa8-config-volume\") pod \"coredns-6d4b75cb6d-q75hh\" (UID: \"5fe6d2eb-8d13-4b38-9fc1-7522ba4f7aa8\") " pod="kube-system/coredns-6d4b75cb6d-q75hh"
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.446001   12882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqzdn\" (UniqueName: \"kubernetes.io/projected/dc0f455d-617e-49e5-9fff-74ba22694e55-kube-api-access-mqzdn\") pod \"coredns-6d4b75cb6d-cbl4h\" (UID: \"dc0f455d-617e-49e5-9fff-74ba22694e55\") " pod="kube-system/coredns-6d4b75cb6d-cbl4h"
	Aug 16 12:35:50 running-upgrade-607000 kubelet[12882]: I0816 12:35:50.546986   12882 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zptk\" (UniqueName: \"kubernetes.io/projected/95c1496e-d4a0-4e2f-87bd-a59d8900e39f-kube-api-access-2zptk\") pod \"kube-proxy-5dvz5\" (UID: \"95c1496e-d4a0-4e2f-87bd-a59d8900e39f\") " pod="kube-system/kube-proxy-5dvz5"
	Aug 16 12:39:38 running-upgrade-607000 kubelet[12882]: I0816 12:39:38.784650   12882 scope.go:110] "RemoveContainer" containerID="fbb13a6d2faf621bb64f20d4c236e0b8400a6d6857ba9300eb437ef612ac12a6"
	Aug 16 12:39:38 running-upgrade-607000 kubelet[12882]: I0816 12:39:38.810771   12882 scope.go:110] "RemoveContainer" containerID="e87bc196aca82d16751232b0ca788462e278d1c6febd887e1b3275a7da9c699e"
	
	
	==> storage-provisioner [af1a471fe36f] <==
	I0816 12:35:50.684378       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 12:35:50.688046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 12:35:50.688107       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 12:35:50.691343       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 12:35:50.691399       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-607000_429037ac-96f8-4c3e-b6bd-d3f4b07da527!
	I0816 12:35:50.691451       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54465126-427e-4f2e-a7ba-b961cd096d28", APIVersion:"v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-607000_429037ac-96f8-4c3e-b6bd-d3f4b07da527 became leader
	I0816 12:35:50.797577       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-607000_429037ac-96f8-4c3e-b6bd-d3f4b07da527!
	

-- /stdout --
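
The storage-provisioner log above ends with a successful leader election: the pod acquires the kube-system/k8s.io-minikube-hostpath lock (an endpoints-based lock, per the Event reference) before starting its controller. As a point of reference for that handshake, here is a minimal sketch of the same flow using client-go's current Lease-based lock; the identity string and timings are illustrative, not the provisioner's actual values.

    package main

    import (
    	"context"
    	"log"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Lease-backed lock; namespace/name mirror the lock seen in the log above.
    	lock := &resourcelock.LeaseLock{
    		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
    		Client:     client.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: "replica-1"}, // hypothetical identity
    	}

    	// Blocks while leading; OnStoppedLeading fires if the lease is lost.
    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:          lock,
    		LeaseDuration: 15 * time.Second,
    		RenewDeadline: 10 * time.Second,
    		RetryPeriod:   2 * time.Second,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) { log.Println("acquired lock, starting controller") },
    			OnStoppedLeading: func() { log.Println("lost lock, stopping") },
    		},
    	})
    }
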
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-607000 -n running-upgrade-607000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-607000 -n running-upgrade-607000: exit status 2 (15.66386725s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-607000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-607000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-607000
--- FAIL: TestRunningBinaryUpgrade (600.48s)
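
Every qemu2 start in this run appears to die on the same underlying condition: nothing is accepting connections on the socket_vmnet unix socket, so socket_vmnet_client fails with "Connection refused" before QEMU ever boots. A quick way to confirm the daemon's state independently of minikube is to dial the socket directly; the probe below is a hypothetical standalone check, not part of the test suite.

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Same path the logs show minikube passing to socket_vmnet_client.
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// A daemon that isn't running yields "connection refused"
    		// (or "no such file or directory" if the socket was never created).
    		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails with "connection refused", restarting the socket_vmnet daemon on the build agent is usually enough to unblock the driver=qemu2 tests above.
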

TestKubernetesUpgrade (18.68s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-604000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-604000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.985626667s)

-- stdout --
	* [kubernetes-upgrade-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-604000" primary control-plane node in "kubernetes-upgrade-604000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-604000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:33:10.438002    8801 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:33:10.438132    8801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:33:10.438135    8801 out.go:358] Setting ErrFile to fd 2...
	I0816 05:33:10.438138    8801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:33:10.438257    8801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:33:10.439341    8801 out.go:352] Setting JSON to false
	I0816 05:33:10.455805    8801 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5559,"bootTime":1723806031,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:33:10.455871    8801 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:33:10.461770    8801 out.go:177] * [kubernetes-upgrade-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:33:10.465686    8801 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:33:10.465725    8801 notify.go:220] Checking for updates...
	I0816 05:33:10.472558    8801 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:33:10.475614    8801 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:33:10.479452    8801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:33:10.483594    8801 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:33:10.486606    8801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:33:10.488232    8801 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:33:10.488305    8801 config.go:182] Loaded profile config "running-upgrade-607000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:33:10.488353    8801 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:33:10.491598    8801 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:33:10.497585    8801 start.go:297] selected driver: qemu2
	I0816 05:33:10.497592    8801 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:33:10.497598    8801 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:33:10.499909    8801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:33:10.502593    8801 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:33:10.508720    8801 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 05:33:10.508752    8801 cni.go:84] Creating CNI manager for ""
	I0816 05:33:10.508759    8801 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 05:33:10.508785    8801 start.go:340] cluster config:
	{Name:kubernetes-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:33:10.512506    8801 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:33:10.516604    8801 out.go:177] * Starting "kubernetes-upgrade-604000" primary control-plane node in "kubernetes-upgrade-604000" cluster
	I0816 05:33:10.524619    8801 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:33:10.524642    8801 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 05:33:10.524651    8801 cache.go:56] Caching tarball of preloaded images
	I0816 05:33:10.524712    8801 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:33:10.524718    8801 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 05:33:10.524779    8801 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/kubernetes-upgrade-604000/config.json ...
	I0816 05:33:10.524790    8801 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/kubernetes-upgrade-604000/config.json: {Name:mk3d7000088898e1a6a5f00ce18479ecd64cde51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:33:10.525127    8801 start.go:360] acquireMachinesLock for kubernetes-upgrade-604000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:33:10.525162    8801 start.go:364] duration metric: took 28.834µs to acquireMachinesLock for "kubernetes-upgrade-604000"
	I0816 05:33:10.525178    8801 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:33:10.525209    8801 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:33:10.529650    8801 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:33:10.556102    8801 start.go:159] libmachine.API.Create for "kubernetes-upgrade-604000" (driver="qemu2")
	I0816 05:33:10.556126    8801 client.go:168] LocalClient.Create starting
	I0816 05:33:10.556194    8801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:33:10.556226    8801 main.go:141] libmachine: Decoding PEM data...
	I0816 05:33:10.556235    8801 main.go:141] libmachine: Parsing certificate...
	I0816 05:33:10.556275    8801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:33:10.556297    8801 main.go:141] libmachine: Decoding PEM data...
	I0816 05:33:10.556305    8801 main.go:141] libmachine: Parsing certificate...
	I0816 05:33:10.556713    8801 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:33:10.712608    8801 main.go:141] libmachine: Creating SSH key...
	I0816 05:33:10.913919    8801 main.go:141] libmachine: Creating Disk image...
	I0816 05:33:10.913928    8801 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:33:10.914162    8801 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2
	I0816 05:33:10.924329    8801 main.go:141] libmachine: STDOUT: 
	I0816 05:33:10.924359    8801 main.go:141] libmachine: STDERR: 
	I0816 05:33:10.924414    8801 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2 +20000M
	I0816 05:33:10.932651    8801 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:33:10.932666    8801 main.go:141] libmachine: STDERR: 
	I0816 05:33:10.932683    8801 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2
	I0816 05:33:10.932688    8801 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:33:10.932702    8801 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:33:10.932727    8801 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:8f:52:96:22:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2
	I0816 05:33:10.934316    8801 main.go:141] libmachine: STDOUT: 
	I0816 05:33:10.934333    8801 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:33:10.934353    8801 client.go:171] duration metric: took 378.226125ms to LocalClient.Create
	I0816 05:33:12.936508    8801 start.go:128] duration metric: took 2.411307958s to createHost
	I0816 05:33:12.936606    8801 start.go:83] releasing machines lock for "kubernetes-upgrade-604000", held for 2.411476167s
	W0816 05:33:12.936691    8801 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:33:12.950729    8801 out.go:177] * Deleting "kubernetes-upgrade-604000" in qemu2 ...
	W0816 05:33:12.980906    8801 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:33:12.981018    8801 start.go:729] Will try again in 5 seconds ...
	I0816 05:33:17.983218    8801 start.go:360] acquireMachinesLock for kubernetes-upgrade-604000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:33:17.983937    8801 start.go:364] duration metric: took 531.333µs to acquireMachinesLock for "kubernetes-upgrade-604000"
	I0816 05:33:17.984027    8801 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:33:17.984346    8801 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:33:17.992068    8801 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:33:18.041474    8801 start.go:159] libmachine.API.Create for "kubernetes-upgrade-604000" (driver="qemu2")
	I0816 05:33:18.041524    8801 client.go:168] LocalClient.Create starting
	I0816 05:33:18.041652    8801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:33:18.041714    8801 main.go:141] libmachine: Decoding PEM data...
	I0816 05:33:18.041735    8801 main.go:141] libmachine: Parsing certificate...
	I0816 05:33:18.041812    8801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:33:18.041858    8801 main.go:141] libmachine: Decoding PEM data...
	I0816 05:33:18.041869    8801 main.go:141] libmachine: Parsing certificate...
	I0816 05:33:18.042374    8801 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:33:18.198705    8801 main.go:141] libmachine: Creating SSH key...
	I0816 05:33:18.329520    8801 main.go:141] libmachine: Creating Disk image...
	I0816 05:33:18.329529    8801 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:33:18.329738    8801 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2
	I0816 05:33:18.338706    8801 main.go:141] libmachine: STDOUT: 
	I0816 05:33:18.338726    8801 main.go:141] libmachine: STDERR: 
	I0816 05:33:18.338769    8801 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2 +20000M
	I0816 05:33:18.346583    8801 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:33:18.346599    8801 main.go:141] libmachine: STDERR: 
	I0816 05:33:18.346609    8801 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2
	I0816 05:33:18.346616    8801 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:33:18.346628    8801 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:33:18.346677    8801 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:92:9f:17:37:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2
	I0816 05:33:18.348255    8801 main.go:141] libmachine: STDOUT: 
	I0816 05:33:18.348270    8801 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:33:18.348285    8801 client.go:171] duration metric: took 306.759208ms to LocalClient.Create
	I0816 05:33:20.350463    8801 start.go:128] duration metric: took 2.366114375s to createHost
	I0816 05:33:20.350556    8801 start.go:83] releasing machines lock for "kubernetes-upgrade-604000", held for 2.366630959s
	W0816 05:33:20.350899    8801 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-604000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-604000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:33:20.361532    8801 out.go:201] 
	W0816 05:33:20.368634    8801 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:33:20.368662    8801 out.go:270] * 
	* 
	W0816 05:33:20.371375    8801 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:33:20.381557    8801 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-604000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-604000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-604000: (3.2766165s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-604000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-604000 status --format={{.Host}}: exit status 7 (53.9885ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-604000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-604000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.189044708s)

-- stdout --
	* [kubernetes-upgrade-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-604000" primary control-plane node in "kubernetes-upgrade-604000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-604000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-604000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:33:23.757872    8837 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:33:23.758009    8837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:33:23.758013    8837 out.go:358] Setting ErrFile to fd 2...
	I0816 05:33:23.758015    8837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:33:23.758146    8837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:33:23.759106    8837 out.go:352] Setting JSON to false
	I0816 05:33:23.775212    8837 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5572,"bootTime":1723806031,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:33:23.775281    8837 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:33:23.779983    8837 out.go:177] * [kubernetes-upgrade-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:33:23.786952    8837 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:33:23.786986    8837 notify.go:220] Checking for updates...
	I0816 05:33:23.794924    8837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:33:23.798970    8837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:33:23.802801    8837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:33:23.805940    8837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:33:23.808967    8837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:33:23.812222    8837 config.go:182] Loaded profile config "kubernetes-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0816 05:33:23.812493    8837 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:33:23.816896    8837 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:33:23.823945    8837 start.go:297] selected driver: qemu2
	I0816 05:33:23.823951    8837 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:33:23.823999    8837 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:33:23.826459    8837 cni.go:84] Creating CNI manager for ""
	I0816 05:33:23.826478    8837 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:33:23.826512    8837 start.go:340] cluster config:
	{Name:kubernetes-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:33:23.830065    8837 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:33:23.838949    8837 out.go:177] * Starting "kubernetes-upgrade-604000" primary control-plane node in "kubernetes-upgrade-604000" cluster
	I0816 05:33:23.842925    8837 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:33:23.842941    8837 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:33:23.842948    8837 cache.go:56] Caching tarball of preloaded images
	I0816 05:33:23.843005    8837 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:33:23.843011    8837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:33:23.843069    8837 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/kubernetes-upgrade-604000/config.json ...
	I0816 05:33:23.843510    8837 start.go:360] acquireMachinesLock for kubernetes-upgrade-604000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:33:23.843543    8837 start.go:364] duration metric: took 24µs to acquireMachinesLock for "kubernetes-upgrade-604000"
	I0816 05:33:23.843553    8837 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:33:23.843558    8837 fix.go:54] fixHost starting: 
	I0816 05:33:23.843695    8837 fix.go:112] recreateIfNeeded on kubernetes-upgrade-604000: state=Stopped err=<nil>
	W0816 05:33:23.843704    8837 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:33:23.851918    8837 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-604000" ...
	I0816 05:33:23.855955    8837 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:33:23.855990    8837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:92:9f:17:37:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2
	I0816 05:33:23.858084    8837 main.go:141] libmachine: STDOUT: 
	I0816 05:33:23.858104    8837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:33:23.858133    8837 fix.go:56] duration metric: took 14.575667ms for fixHost
	I0816 05:33:23.858137    8837 start.go:83] releasing machines lock for "kubernetes-upgrade-604000", held for 14.590459ms
	W0816 05:33:23.858145    8837 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:33:23.858185    8837 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:33:23.858190    8837 start.go:729] Will try again in 5 seconds ...
	I0816 05:33:28.860295    8837 start.go:360] acquireMachinesLock for kubernetes-upgrade-604000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:33:28.860824    8837 start.go:364] duration metric: took 413.208µs to acquireMachinesLock for "kubernetes-upgrade-604000"
	I0816 05:33:28.860908    8837 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:33:28.860923    8837 fix.go:54] fixHost starting: 
	I0816 05:33:28.861554    8837 fix.go:112] recreateIfNeeded on kubernetes-upgrade-604000: state=Stopped err=<nil>
	W0816 05:33:28.861576    8837 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:33:28.869931    8837 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-604000" ...
	I0816 05:33:28.874968    8837 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:33:28.875165    8837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:92:9f:17:37:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubernetes-upgrade-604000/disk.qcow2
	I0816 05:33:28.883679    8837 main.go:141] libmachine: STDOUT: 
	I0816 05:33:28.883746    8837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:33:28.883862    8837 fix.go:56] duration metric: took 22.938125ms for fixHost
	I0816 05:33:28.883879    8837 start.go:83] releasing machines lock for "kubernetes-upgrade-604000", held for 23.036958ms
	W0816 05:33:28.884112    8837 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:33:28.890886    8837 out.go:201] 
	W0816 05:33:28.894049    8837 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:33:28.894071    8837 out.go:270] * 
	* 
	W0816 05:33:28.895360    8837 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:33:28.905900    8837 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-604000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-604000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-604000 version --output=json: exit status 1 (56.935333ms)

** stderr ** 
	error: context "kubernetes-upgrade-604000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-16 05:33:28.976156 -0700 PDT m=+846.465843710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-604000 -n kubernetes-upgrade-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-604000 -n kubernetes-upgrade-604000: exit status 7 (33.242042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-604000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-604000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-604000
--- FAIL: TestKubernetesUpgrade (18.68s)
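
The post-mortem above relies on minikube status --format={{.Host}}: the format flag is a standard Go text/template executed against minikube's status struct, which is why the bare word "Stopped" is the entire stdout. A minimal sketch of that rendering follows; the struct here is a stand-in carrying only the two fields this report queries, not minikube's real type.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a hypothetical stand-in for minikube's status struct; only
    // the fields this report queries via --format are sketched here.
    type Status struct {
    	Host      string
    	APIServer string
    }

    func main() {
    	st := Status{Host: "Stopped", APIServer: "Stopped"}
    	// --format={{.Host}} is parsed and executed essentially like this:
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
    	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Stopped
    		panic(err)
    	}
    }
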

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.26s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current16446810/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.26s)
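
DRV_UNSUPPORTED_OS is the expected outcome on this agent: hyperkit is an Intel-only macOS hypervisor, so a darwin/arm64 host refuses the driver before doing any real work. The guard amounts to a platform check of roughly this shape; this is an illustrative sketch, not minikube's actual validation code.

    package main

    import (
    	"fmt"
    	"os"
    	"runtime"
    )

    // hyperkitSupported reports whether the hyperkit driver can run on this
    // platform; hyperkit requires macOS on amd64.
    func hyperkitSupported() bool {
    	return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
    }

    func main() {
    	if !hyperkitSupported() {
    		fmt.Fprintf(os.Stderr,
    			"X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
    			runtime.GOOS, runtime.GOARCH)
    		os.Exit(56) // mirrors the exit status 56 the test observed
    	}
    }
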

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.95s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1287765685/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.95s)

TestStoppedBinaryUpgrade/Upgrade (574.58s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3403133453 start -p stopped-upgrade-972000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3403133453 start -p stopped-upgrade-972000 --memory=2200 --vm-driver=qemu2 : (40.820792042s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3403133453 -p stopped-upgrade-972000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3403133453 -p stopped-upgrade-972000 stop: (12.108525834s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-972000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-972000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.557751334s)

-- stdout --
	* [stopped-upgrade-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-972000" primary control-plane node in "stopped-upgrade-972000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-972000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0816 05:34:23.166240    8876 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:34:23.166411    8876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:34:23.166415    8876 out.go:358] Setting ErrFile to fd 2...
	I0816 05:34:23.166418    8876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:34:23.166557    8876 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:34:23.167778    8876 out.go:352] Setting JSON to false
	I0816 05:34:23.187568    8876 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5632,"bootTime":1723806031,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:34:23.187637    8876 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:34:23.192794    8876 out.go:177] * [stopped-upgrade-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:34:23.199787    8876 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:34:23.199861    8876 notify.go:220] Checking for updates...
	I0816 05:34:23.207747    8876 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:34:23.210776    8876 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:34:23.213830    8876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:34:23.216727    8876 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:34:23.219725    8876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:34:23.223145    8876 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:34:23.224753    8876 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 05:34:23.227765    8876 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:34:23.230758    8876 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:34:23.238917    8876 start.go:297] selected driver: qemu2
	I0816 05:34:23.238924    8876 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 05:34:23.238974    8876 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:34:23.241509    8876 cni.go:84] Creating CNI manager for ""
	I0816 05:34:23.241526    8876 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:34:23.241561    8876 start.go:340] cluster config:
	{Name:stopped-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 05:34:23.241622    8876 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:34:23.250739    8876 out.go:177] * Starting "stopped-upgrade-972000" primary control-plane node in "stopped-upgrade-972000" cluster
	I0816 05:34:23.254751    8876 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 05:34:23.254769    8876 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0816 05:34:23.254774    8876 cache.go:56] Caching tarball of preloaded images
	I0816 05:34:23.254831    8876 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:34:23.254836    8876 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0816 05:34:23.254890    8876 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/config.json ...
	I0816 05:34:23.255353    8876 start.go:360] acquireMachinesLock for stopped-upgrade-972000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:34:23.255392    8876 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "stopped-upgrade-972000"
	I0816 05:34:23.255402    8876 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:34:23.255407    8876 fix.go:54] fixHost starting: 
	I0816 05:34:23.255524    8876 fix.go:112] recreateIfNeeded on stopped-upgrade-972000: state=Stopped err=<nil>
	W0816 05:34:23.255533    8876 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:34:23.262749    8876 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-972000" ...
	I0816 05:34:23.266761    8876 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:34:23.266825    8876 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51362-:22,hostfwd=tcp::51363-:2376,hostname=stopped-upgrade-972000 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/disk.qcow2
	I0816 05:34:23.313759    8876 main.go:141] libmachine: STDOUT: 
	I0816 05:34:23.313791    8876 main.go:141] libmachine: STDERR: 
	I0816 05:34:23.313796    8876 main.go:141] libmachine: Waiting for VM to start (ssh -p 51362 docker@127.0.0.1)...
	I0816 05:34:43.157488    8876 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/config.json ...
	I0816 05:34:43.157764    8876 machine.go:93] provisionDockerMachine start ...
	I0816 05:34:43.157880    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.158064    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.158070    8876 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:34:43.221670    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:34:43.221688    8876 buildroot.go:166] provisioning hostname "stopped-upgrade-972000"
	I0816 05:34:43.221752    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.221892    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.221900    8876 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-972000 && echo "stopped-upgrade-972000" | sudo tee /etc/hostname
	I0816 05:34:43.288657    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-972000
	
	I0816 05:34:43.288718    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.288870    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.288883    8876 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-972000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-972000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-972000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:34:43.353581    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 05:34:43.353592    8876 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-6249/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-6249/.minikube}
	I0816 05:34:43.353606    8876 buildroot.go:174] setting up certificates
	I0816 05:34:43.353611    8876 provision.go:84] configureAuth start
	I0816 05:34:43.353621    8876 provision.go:143] copyHostCerts
	I0816 05:34:43.353698    8876 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.pem, removing ...
	I0816 05:34:43.353705    8876 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.pem
	I0816 05:34:43.353942    8876 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.pem (1082 bytes)
	I0816 05:34:43.354151    8876 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-6249/.minikube/cert.pem, removing ...
	I0816 05:34:43.354155    8876 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-6249/.minikube/cert.pem
	I0816 05:34:43.354218    8876 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-6249/.minikube/cert.pem (1123 bytes)
	I0816 05:34:43.354347    8876 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-6249/.minikube/key.pem, removing ...
	I0816 05:34:43.354351    8876 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-6249/.minikube/key.pem
	I0816 05:34:43.354406    8876 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-6249/.minikube/key.pem (1679 bytes)
	I0816 05:34:43.354504    8876 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-972000 san=[127.0.0.1 localhost minikube stopped-upgrade-972000]
	I0816 05:34:43.450834    8876 provision.go:177] copyRemoteCerts
	I0816 05:34:43.450866    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:34:43.450875    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	I0816 05:34:43.485452    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:34:43.492245    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0816 05:34:43.498956    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 05:34:43.505983    8876 provision.go:87] duration metric: took 152.363208ms to configureAuth
	I0816 05:34:43.505995    8876 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:34:43.506108    8876 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:34:43.506143    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.506228    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.506235    8876 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:34:43.566747    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:34:43.566757    8876 buildroot.go:70] root file system type: tmpfs
	I0816 05:34:43.566809    8876 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:34:43.566857    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.566967    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.567003    8876 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:34:43.633449    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:34:43.633500    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:43.633609    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:43.633620    8876 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:34:43.997513    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:34:43.997525    8876 machine.go:96] duration metric: took 839.768208ms to provisionDockerMachine
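
The unit install just above relies on a compare-then-swap idiom: render the new unit to docker.service.new, and only move it into place and restart docker when diff reports a difference (or, as in this run, when the old file does not yet exist). A standalone sketch of the same idiom, with UNIT as a placeholder:

	UNIT=/lib/systemd/system/docker.service
	# diff exits non-zero when the files differ or the old unit is missing,
	# so the replace-and-restart branch runs only when something changed
	sudo diff -u "$UNIT" "$UNIT.new" || {
	  sudo mv "$UNIT.new" "$UNIT"
	  sudo systemctl -f daemon-reload &&
	  sudo systemctl -f enable docker &&
	  sudo systemctl -f restart docker
	}
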
	I0816 05:34:43.997536    8876 start.go:293] postStartSetup for "stopped-upgrade-972000" (driver="qemu2")
	I0816 05:34:43.997542    8876 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:34:43.997614    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:34:43.997624    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	I0816 05:34:44.029522    8876 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:34:44.030859    8876 info.go:137] Remote host: Buildroot 2021.02.12
	I0816 05:34:44.030866    8876 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-6249/.minikube/addons for local assets ...
	I0816 05:34:44.030951    8876 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-6249/.minikube/files for local assets ...
	I0816 05:34:44.031069    8876 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0816 05:34:44.031196    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:34:44.034051    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0816 05:34:44.041303    8876 start.go:296] duration metric: took 43.761333ms for postStartSetup
	I0816 05:34:44.041318    8876 fix.go:56] duration metric: took 20.786254541s for fixHost
	I0816 05:34:44.041353    8876 main.go:141] libmachine: Using SSH client type: native
	I0816 05:34:44.041455    8876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10089c5a0] 0x10089ee00 <nil>  [] 0s} localhost 51362 <nil> <nil>}
	I0816 05:34:44.041460    8876 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:34:44.101151    8876 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723811683.930611296
	
	I0816 05:34:44.101160    8876 fix.go:216] guest clock: 1723811683.930611296
	I0816 05:34:44.101164    8876 fix.go:229] Guest: 2024-08-16 05:34:43.930611296 -0700 PDT Remote: 2024-08-16 05:34:44.041319 -0700 PDT m=+20.904770793 (delta=-110.707704ms)
	I0816 05:34:44.101175    8876 fix.go:200] guest clock delta is within tolerance: -110.707704ms
	I0816 05:34:44.101182    8876 start.go:83] releasing machines lock for "stopped-upgrade-972000", held for 20.846125166s
	I0816 05:34:44.101251    8876 ssh_runner.go:195] Run: cat /version.json
	I0816 05:34:44.101262    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	I0816 05:34:44.101251    8876 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:34:44.101305    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	W0816 05:34:44.102196    8876 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51483->127.0.0.1:51362: write: broken pipe
	I0816 05:34:44.102213    8876 retry.go:31] will retry after 368.049268ms: ssh: handshake failed: write tcp 127.0.0.1:51483->127.0.0.1:51362: write: broken pipe
	W0816 05:34:44.131802    8876 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0816 05:34:44.131863    8876 ssh_runner.go:195] Run: systemctl --version
	I0816 05:34:44.133712    8876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 05:34:44.135160    8876 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:34:44.135190    8876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0816 05:34:44.138031    8876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0816 05:34:44.142923    8876 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:34:44.142938    8876 start.go:495] detecting cgroup driver to use...
	I0816 05:34:44.143031    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:34:44.150141    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0816 05:34:44.153730    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:34:44.157019    8876 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:34:44.157051    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:34:44.159953    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:34:44.162988    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:34:44.166074    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:34:44.169086    8876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:34:44.171780    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:34:44.174735    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:34:44.178142    8876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:34:44.181563    8876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:34:44.184530    8876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:34:44.187176    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:44.267858    8876 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 05:34:44.278443    8876 start.go:495] detecting cgroup driver to use...
	I0816 05:34:44.278502    8876 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:34:44.283686    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:34:44.289144    8876 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:34:44.297432    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:34:44.301980    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:34:44.306708    8876 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:34:44.362389    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:34:44.367636    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:34:44.372764    8876 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:34:44.374104    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:34:44.376915    8876 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0816 05:34:44.382139    8876 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:34:44.470516    8876 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:34:44.553539    8876 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:34:44.553597    8876 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:34:44.559010    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:44.644546    8876 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:34:45.799713    8876 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155169416s)
	I0816 05:34:45.799786    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 05:34:45.805905    8876 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 05:34:45.812401    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:34:45.817868    8876 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 05:34:45.898418    8876 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 05:34:45.975826    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:46.058824    8876 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 05:34:46.065429    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:34:46.070290    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:46.170840    8876 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 05:34:46.211562    8876 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 05:34:46.211657    8876 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 05:34:46.214657    8876 start.go:563] Will wait 60s for crictl version
	I0816 05:34:46.214718    8876 ssh_runner.go:195] Run: which crictl
	I0816 05:34:46.216194    8876 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 05:34:46.231421    8876 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0816 05:34:46.231494    8876 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:34:46.248322    8876 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:34:46.269837    8876 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0816 05:34:46.269905    8876 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0816 05:34:46.271313    8876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:34:46.274964    8876 kubeadm.go:883] updating cluster {Name:stopped-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0816 05:34:46.275008    8876 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 05:34:46.275048    8876 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:34:46.285343    8876 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 05:34:46.285352    8876 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 05:34:46.285400    8876 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 05:34:46.288467    8876 ssh_runner.go:195] Run: which lz4
	I0816 05:34:46.289711    8876 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 05:34:46.291014    8876 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 05:34:46.291023    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0816 05:34:47.203882    8876 docker.go:649] duration metric: took 914.225583ms to copy over tarball
	I0816 05:34:47.203948    8876 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 05:34:48.385069    8876 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.181113667s)
	I0816 05:34:48.385087    8876 ssh_runner.go:146] rm: /preloaded.tar.lz4
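
Because the runtime's image store did not contain the expected registry.k8s.io images, the preload path kicks in: the ~360 MB lz4 tarball is scp'd into the guest as /preloaded.tar.lz4 and unpacked directly over /var, populating /var/lib/docker in one pass. The two guest-side commands, restated for clarity:

	# Unpack the preloaded image store over /var, keeping xattrs such as
	# security.capability, then delete the tarball to reclaim disk space
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	rm /preloaded.tar.lz4
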
	I0816 05:34:48.400586    8876 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 05:34:48.403459    8876 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0816 05:34:48.408431    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:48.491824    8876 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:34:50.251911    8876 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.760099875s)
	I0816 05:34:50.252007    8876 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:34:50.268906    8876 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 05:34:50.268917    8876 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 05:34:50.268923    8876 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 05:34:50.272910    8876 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:34:50.274819    8876 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:34:50.276529    8876 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:34:50.276701    8876 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:34:50.278311    8876 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:34:50.278477    8876 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:34:50.279791    8876 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:34:50.280220    8876 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:34:50.281037    8876 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:34:50.281425    8876 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:34:50.282398    8876 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:34:50.282469    8876 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:34:50.283193    8876 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:34:50.283542    8876 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0816 05:34:50.284209    8876 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:34:50.284729    8876 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0816 05:34:50.752021    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:34:50.752666    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:34:50.759901    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:34:50.768507    8876 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0816 05:34:50.768546    8876 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:34:50.768609    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0816 05:34:50.768623    8876 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0816 05:34:50.768639    8876 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:34:50.768664    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0816 05:34:50.778017    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:34:50.784866    8876 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0816 05:34:50.784878    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0816 05:34:50.784884    8876 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:34:50.784932    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 05:34:50.796347    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0816 05:34:50.796365    8876 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0816 05:34:50.796386    8876 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:34:50.796436    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0816 05:34:50.798224    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0816 05:34:50.801240    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0816 05:34:50.802354    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0816 05:34:50.810330    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0816 05:34:50.811332    8876 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0816 05:34:50.811445    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:34:50.816012    8876 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0816 05:34:50.816034    8876 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0816 05:34:50.816083    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0816 05:34:50.819456    8876 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0816 05:34:50.819475    8876 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0816 05:34:50.819524    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0816 05:34:50.834786    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0816 05:34:50.834943    8876 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0816 05:34:50.834966    8876 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:34:50.835012    8876 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0816 05:34:50.843902    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0816 05:34:50.844018    8876 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0816 05:34:50.851475    8876 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0816 05:34:50.851504    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0816 05:34:50.851601    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0816 05:34:50.851691    8876 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0816 05:34:50.853200    8876 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0816 05:34:50.853213    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0816 05:34:50.875243    8876 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0816 05:34:50.875257    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0816 05:34:50.916855    8876 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0816 05:34:50.916875    8876 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0816 05:34:50.916893    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0816 05:34:50.953311    8876 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0816 05:34:51.009629    8876 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0816 05:34:51.009755    8876 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:34:51.022302    8876 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0816 05:34:51.022326    8876 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:34:51.022380    8876 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:34:51.037711    8876 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 05:34:51.037836    8876 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 05:34:51.039353    8876 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0816 05:34:51.039365    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0816 05:34:51.071182    8876 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 05:34:51.071203    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0816 05:34:51.311146    8876 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 05:34:51.311189    8876 cache_images.go:92] duration metric: took 1.04227675s to LoadCachedImages
	W0816 05:34:51.311236    8876 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
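
Each image that survives the cache check is copied to /var/lib/minikube/images/ and streamed into the runtime with docker load, as with pause, coredns, and storage-provisioner above; a sketch of the guest-side idiom (docker load -i FILE would be equivalent):

	# Load a saved image tarball into the VM's docker image store
	sudo cat /var/lib/minikube/images/pause_3.7 | docker load

Note the X warning just above: the cached kube-apiserver_v1.24.1 file is missing from the host, so LoadCachedImages aborts before the control-plane images are transferred.
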
	I0816 05:34:51.311244    8876 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0816 05:34:51.311296    8876 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-972000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 05:34:51.311356    8876 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 05:34:51.325126    8876 cni.go:84] Creating CNI manager for ""
	I0816 05:34:51.325137    8876 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:34:51.325144    8876 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 05:34:51.325154    8876 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-972000 NodeName:stopped-upgrade-972000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 05:34:51.325235    8876 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-972000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 05:34:51.325285    8876 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0816 05:34:51.328176    8876 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 05:34:51.328206    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 05:34:51.330735    8876 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0816 05:34:51.335544    8876 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 05:34:51.340679    8876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0816 05:34:51.346176    8876 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0816 05:34:51.347478    8876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
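
The /etc/hosts rewrite above uses a standard shell idiom: a brace group first emits the file with any stale control-plane.minikube.internal entry filtered out, then appends the fresh "ip<tab>host" line, and the result is copied back with sudo. A pure-Go sketch of the same edit (the helper name and local path are illustrative; the real command runs remotely over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// addHostAlias rewrites an /etc/hosts-style file so that exactly one
// line maps ip to host, dropping any stale entry for that host first.
func addHostAlias(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry, like grep -v does above
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := addHostAlias("hosts.txt", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
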
	I0816 05:34:51.350946    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:34:51.428081    8876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:34:51.433495    8876 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000 for IP: 10.0.2.15
	I0816 05:34:51.433502    8876 certs.go:194] generating shared ca certs ...
	I0816 05:34:51.433510    8876 certs.go:226] acquiring lock for ca certs: {Name:mk6cf8af742115923453a119a0b968ea241ec803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:34:51.433677    8876 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.key
	I0816 05:34:51.433728    8876 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/proxy-client-ca.key
	I0816 05:34:51.433736    8876 certs.go:256] generating profile certs ...
	I0816 05:34:51.433809    8876 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.key
	I0816 05:34:51.433826    8876 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key.1ac75644
	I0816 05:34:51.433839    8876 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt.1ac75644 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0816 05:34:51.488062    8876 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt.1ac75644 ...
	I0816 05:34:51.488074    8876 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt.1ac75644: {Name:mkaad8b00746cefd9f64ceee91316d9444dd95e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:34:51.488705    8876 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key.1ac75644 ...
	I0816 05:34:51.488712    8876 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key.1ac75644: {Name:mk3df119846dcced9aba850eb0346c334139cbfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:34:51.488882    8876 certs.go:381] copying /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt.1ac75644 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt
	I0816 05:34:51.489022    8876 certs.go:385] copying /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key.1ac75644 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key
	I0816 05:34:51.489178    8876 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/proxy-client.key
	I0816 05:34:51.489307    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/6746.pem (1338 bytes)
	W0816 05:34:51.489337    8876 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0816 05:34:51.489346    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 05:34:51.489365    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem (1082 bytes)
	I0816 05:34:51.489383    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem (1123 bytes)
	I0816 05:34:51.489403    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/key.pem (1679 bytes)
	I0816 05:34:51.489448    8876 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0816 05:34:51.489780    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 05:34:51.496600    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 05:34:51.502904    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 05:34:51.510032    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 05:34:51.516895    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 05:34:51.523752    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 05:34:51.530762    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 05:34:51.538022    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 05:34:51.545487    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 05:34:51.552558    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0816 05:34:51.559218    8876 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0816 05:34:51.566492    8876 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 05:34:51.571720    8876 ssh_runner.go:195] Run: openssl version
	I0816 05:34:51.573546    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 05:34:51.576719    8876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:34:51.578222    8876 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:30 /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:34:51.578243    8876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:34:51.580168    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 05:34:51.583074    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0816 05:34:51.586574    8876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0816 05:34:51.587986    8876 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:20 /usr/share/ca-certificates/6746.pem
	I0816 05:34:51.588005    8876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0816 05:34:51.589814    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0816 05:34:51.592960    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0816 05:34:51.595840    8876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0816 05:34:51.597266    8876 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:20 /usr/share/ca-certificates/67462.pem
	I0816 05:34:51.597286    8876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0816 05:34:51.598985    8876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
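
The openssl x509 -hash -noout calls above print each certificate's subject-name hash; OpenSSL locates trusted CAs in /etc/ssl/certs through <hash>.0 symlinks, which is why each PEM gets an ln -fs to a name like b5213941.0. A sketch of that step, assuming shelling out to openssl is acceptable (paths illustrative; on the node this runs under sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates the /etc/ssl/certs/<subject-hash>.0 symlink that
// OpenSSL uses to find a trusted CA certificate.
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
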
	I0816 05:34:51.602377    8876 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 05:34:51.603797    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 05:34:51.606051    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 05:34:51.608155    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 05:34:51.610132    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 05:34:51.612077    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 05:34:51.613793    8876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
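
Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), so this block is a cheap "will every control-plane cert still be valid tomorrow" probe. The same check in pure Go, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded cert at path expires
// within d, the Go equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
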
	I0816 05:34:51.615676    8876 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 05:34:51.615745    8876 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 05:34:51.626359    8876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 05:34:51.629899    8876 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 05:34:51.629905    8876 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 05:34:51.629929    8876 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 05:34:51.632699    8876 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:34:51.632978    8876 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-972000" does not appear in /Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:34:51.633068    8876 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-6249/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-972000" cluster setting kubeconfig missing "stopped-upgrade-972000" context setting]
	I0816 05:34:51.633280    8876 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/kubeconfig: {Name:mka7b2a1dac03f0ea4ac28563b4fe884a2b1b206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:34:51.633717    8876 kapi.go:59] client config for stopped-upgrade-972000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101e55610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
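
The rest.Config dumped at kapi.go:59 is a client-go configuration pointing at the profile's client certificate, key, and CA. A sketch of building an equivalent client, assuming k8s.io/client-go is available as a dependency:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Mirrors the config above: host plus client cert/key plus CA file.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	_ = clientset // ready for API calls once the apiserver answers
}
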
	I0816 05:34:51.634049    8876 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 05:34:51.636608    8876 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-972000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
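
kubeadm.go:640 above decides whether to reconfigure by diffing the on-disk kubeadm.yaml against the freshly generated kubeadm.yaml.new; diff -u exits 1 when the files differ, and the unified diff itself is what got logged. A sketch of that decision, under the assumption that exit status 1 is interpreted as drift and anything higher as a real error:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u old new` and interprets the exit status:
// 0 = identical, 1 = files differ (drift), anything else = real error.
func configDrift(oldPath, newPath string) (drift bool, diff string, err error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drift {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}
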
	I0816 05:34:51.636614    8876 kubeadm.go:1160] stopping kube-system containers ...
	I0816 05:34:51.636656    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 05:34:51.647351    8876 docker.go:483] Stopping containers: [d49ec1605243 02153e39f839 a54c050fa5fd d464a7742a93 753544007c33 fdf37f08503a a3b3052a7b8a e3381be358f6]
	I0816 05:34:51.647424    8876 ssh_runner.go:195] Run: docker stop d49ec1605243 02153e39f839 a54c050fa5fd d464a7742a93 753544007c33 fdf37f08503a a3b3052a7b8a e3381be358f6
	I0816 05:34:51.658407    8876 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 05:34:51.664163    8876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 05:34:51.666846    8876 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 05:34:51.666853    8876 kubeadm.go:157] found existing configuration files:
	
	I0816 05:34:51.666873    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/admin.conf
	I0816 05:34:51.669762    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 05:34:51.669794    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 05:34:51.672452    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/kubelet.conf
	I0816 05:34:51.674866    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 05:34:51.674885    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 05:34:51.678050    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/controller-manager.conf
	I0816 05:34:51.680874    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 05:34:51.680893    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 05:34:51.683426    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/scheduler.conf
	I0816 05:34:51.686161    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 05:34:51.686186    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
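
The kubeadm.go:163 blocks above apply one rule to each of the four kubeconfig files: grep for the expected control-plane endpoint, and rm -f the file when the grep fails (the status 2 in this log just means the file does not exist yet). A sketch of that loop, treating any non-zero grep exit as "not found":

package main

import (
	"fmt"
	"os/exec"
)

// cleanupStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint; missing files are removed harmlessly.
func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		// grep exits non-zero when the pattern is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:51397", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
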
	I0816 05:34:51.689151    8876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 05:34:51.691930    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:34:51.716608    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:34:52.433962    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:34:52.570851    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 05:34:52.602858    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
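
Note that the restart path does not run a full `kubeadm init`; it replays the individual phases seen above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing data directory. A sketch of that sequence; the PATH handling in the log (prefixing /var/lib/minikube/binaries/v1.24.1) is noted in a comment rather than reproduced:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// The log runs this via bash with PATH=/var/lib/minikube/binaries/v1.24.1:$PATH and sudo.
		cmd := exec.Command("kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}
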
	I0816 05:34:52.624166    8876 api_server.go:52] waiting for apiserver process to appear ...
	I0816 05:34:52.624245    8876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:34:53.125192    8876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:34:53.626298    8876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:34:53.630342    8876 api_server.go:72] duration metric: took 1.006194667s to wait for apiserver process to appear ...
	I0816 05:34:53.630353    8876 api_server.go:88] waiting for apiserver healthz status ...
	I0816 05:34:53.630368    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:34:58.632362    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:34:58.632383    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:03.632494    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:03.632544    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:08.632884    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:08.632923    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:13.633269    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:13.633293    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:18.633882    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:18.633918    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:23.634619    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:23.634668    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:28.635771    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:28.635825    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:33.637003    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:33.637029    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:38.638503    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:38.638534    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:43.639576    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:43.639620    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:48.641847    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:35:48.641869    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:35:53.643210    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
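
Each Checking/stopped pair above is a GET against https://10.0.2.15:8443/healthz with roughly a five-second client timeout; the apiserver never answers, so every probe ends in Client.Timeout, and from here on the log alternates between this probe loop and a diagnostics pass. A minimal sketch of such a probe loop, with the timeout inferred from the timestamps:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The sketch only probes liveness, so certificate checks are skipped;
			// the real health check can instead trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 12; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
}
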
	I0816 05:35:53.643380    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:35:53.655584    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:35:53.655660    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:35:53.666552    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:35:53.666628    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:35:53.676664    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:35:53.676735    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:35:53.686902    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:35:53.686975    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:35:53.697761    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:35:53.697831    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:35:53.711032    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:35:53.711128    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:35:53.721464    8876 logs.go:276] 0 containers: []
	W0816 05:35:53.721475    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:35:53.721535    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:35:53.731758    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:35:53.731780    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:35:53.731788    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:35:53.745735    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:35:53.745747    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:35:53.757664    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:35:53.757674    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:35:53.774338    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:35:53.774348    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:35:53.785895    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:35:53.785905    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:35:53.798248    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:35:53.798259    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:35:53.838444    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:35:53.838457    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:35:53.854221    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:35:53.854237    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:35:53.871994    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:35:53.872007    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:35:53.896995    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:35:53.897002    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:35:53.915355    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:35:53.915365    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:35:54.025248    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:35:54.025260    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:35:54.029303    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:35:54.029309    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:35:54.043538    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:35:54.043554    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:35:54.059329    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:35:54.059339    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:35:54.071726    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:35:54.071737    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:35:54.084994    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:35:54.085008    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
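
When the healthz wait fails, the diagnostics pass repeated throughout the rest of this log kicks in: list each control-plane component's containers with a docker ps -a name filter, then pull docker logs --tail 400 for every hit, alongside the kubelet and docker journals, dmesg, and describe nodes. A sketch of the per-container part:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose names match the
// k8s_<component> pattern used by the docker ps filters in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		for _, id := range ids {
			// Mirror `docker logs --tail 400 <id>` from the log.
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, out)
		}
	}
}
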
	I0816 05:35:56.629300    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:01.629505    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:01.629664    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:01.644333    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:01.644408    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:01.664607    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:01.664685    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:01.675912    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:01.675987    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:01.686612    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:01.686683    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:01.697236    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:01.697305    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:01.708554    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:01.708623    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:01.718498    8876 logs.go:276] 0 containers: []
	W0816 05:36:01.718512    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:01.718576    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:01.729926    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:01.729943    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:01.729949    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:01.743316    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:01.743326    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:01.755565    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:01.755576    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:01.769786    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:01.769796    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:01.781597    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:01.781609    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:01.795852    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:01.795863    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:01.800746    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:01.800752    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:01.811760    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:01.811771    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:01.826745    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:01.826756    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:01.843647    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:01.843657    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:01.856593    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:01.856603    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:01.895426    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:01.895435    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:01.933863    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:01.933874    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:01.972091    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:01.972101    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:01.997906    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:01.997917    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:02.017932    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:02.017943    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:02.038915    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:02.038926    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:04.554215    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:09.556466    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:09.556662    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:09.575069    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:09.575166    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:09.588138    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:09.588215    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:09.602480    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:09.602553    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:09.617430    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:09.617526    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:09.628065    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:09.628132    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:09.638721    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:09.638796    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:09.648761    8876 logs.go:276] 0 containers: []
	W0816 05:36:09.648772    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:09.648834    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:09.659317    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:09.659335    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:09.659341    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:09.673808    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:09.673819    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:09.690852    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:09.690864    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:09.702010    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:09.702022    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:09.716344    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:09.716353    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:09.727359    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:09.727370    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:09.752995    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:09.753008    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:09.791158    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:09.791167    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:09.795704    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:09.795712    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:09.836361    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:09.836372    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:09.850331    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:09.850340    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:09.862626    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:09.862636    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:09.874081    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:09.874092    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:09.891084    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:09.891095    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:09.904187    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:09.904201    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:09.937383    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:09.937394    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:09.951878    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:09.951888    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:12.467349    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:17.469629    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:17.469819    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:17.491881    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:17.491979    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:17.507036    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:17.507120    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:17.519393    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:17.519469    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:17.530526    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:17.530592    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:17.540538    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:17.540600    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:17.550880    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:17.550957    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:17.561360    8876 logs.go:276] 0 containers: []
	W0816 05:36:17.561370    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:17.561427    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:17.571710    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:17.571727    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:17.571733    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:17.610895    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:17.610905    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:17.625336    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:17.625349    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:17.637310    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:17.637321    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:17.652596    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:17.652608    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:17.663890    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:17.663905    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:17.676831    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:17.676843    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:17.680988    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:17.680994    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:17.719436    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:17.719451    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:17.733528    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:17.733539    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:17.747653    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:17.747664    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:17.759433    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:17.759447    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:17.772140    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:17.772153    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:17.811879    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:17.811896    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:17.827318    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:17.827333    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:17.845346    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:17.845357    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:17.869640    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:17.869649    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:20.383174    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:25.385708    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:25.385909    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:25.402508    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:25.402595    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:25.415773    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:25.415848    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:25.426886    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:25.426958    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:25.437551    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:25.437628    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:25.447756    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:25.447823    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:25.457952    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:25.458026    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:25.468065    8876 logs.go:276] 0 containers: []
	W0816 05:36:25.468077    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:25.468137    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:25.478521    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:25.478541    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:25.478546    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:25.501488    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:25.501506    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:25.516184    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:25.516200    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:25.529856    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:25.529867    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:25.552090    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:25.552101    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:25.566359    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:25.566368    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:25.581034    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:25.581045    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:25.598368    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:25.598378    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:25.610047    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:25.610057    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:25.634594    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:25.634605    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:25.670222    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:25.670232    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:25.681396    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:25.681408    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:25.685666    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:25.685672    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:25.696940    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:25.696952    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:25.709173    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:25.709187    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:25.721105    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:25.721118    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:25.760300    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:25.760308    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:28.300287    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:33.300569    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:33.300717    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:33.317112    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:33.317201    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:33.331176    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:33.331252    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:33.342051    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:33.342120    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:33.353285    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:33.353354    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:33.363453    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:33.363522    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:33.373845    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:33.373905    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:33.383953    8876 logs.go:276] 0 containers: []
	W0816 05:36:33.383966    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:33.384029    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:33.394814    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:33.394830    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:33.394836    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:33.409205    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:33.409216    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:33.424235    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:33.424245    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:33.440056    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:33.440070    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:33.478150    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:33.478163    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:33.494012    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:33.494023    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:33.506022    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:33.506034    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:33.529615    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:33.529624    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:33.567836    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:33.567849    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:33.583180    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:33.583193    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:33.601323    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:33.601334    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:33.613281    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:33.613294    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:33.625861    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:33.625874    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:33.637223    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:33.637234    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:33.651079    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:33.651093    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:33.664395    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:33.664405    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:33.668709    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:33.668718    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
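Each diagnostic pass above follows the same two-step shape: discover the component containers with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", then tail the last 400 lines of each hit with "docker logs --tail 400 <id>". A minimal local sketch of that discovery-then-tail pattern follows; it is an illustration only, not minikube's ssh_runner/logs code, and the component list and helper name are assumptions for the example:

    // Sketch of the discovery-then-tail pattern visible in the log above:
    // list container IDs per component, then tail each one's logs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same docker ps filter/format invocation
    // that appears in the log and returns the matching container IDs.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // Mirrors: docker logs --tail 400 <id>; CombinedOutput
                // captures both stdout and stderr of the container.
                logs, _ := exec.Command("docker", "logs",
                    "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }

Run against a live Docker host, this prints one "No container was found matching ..." line per absent component, matching the kindnet warnings in the log.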
	I0816 05:36:36.205655    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:41.208031    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
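The pair of lines above repeats for the rest of this section: each "Checking apiserver healthz" probe is given roughly five seconds (05:36:36 to 05:36:41 here) before the client gives up with "context deadline exceeded" and another round of log gathering begins. Below is a self-contained sketch of that probe pattern, assuming a plain net/http client with a five-second timeout and TLS verification disabled; this is not minikube's api_server.go, and the retry count and inter-probe gap are assumptions (a real client would also trust the cluster CA rather than skip verification):

    // Illustrative healthz probe with a per-request deadline, mirroring
    // the five-second window between "Checking apiserver healthz" and
    // "stopped:" in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the observed probe window
            Transport: &http.Transport{
                // Assumption for the sketch: skip cert verification; the
                // real client would verify against the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://10.0.2.15:8443/healthz"
        for attempt := 1; attempt <= 3; attempt++ {
            resp, err := client.Get(url)
            if err != nil {
                // The branch this report keeps hitting:
                // "Client.Timeout exceeded while awaiting headers".
                fmt.Printf("stopped: %s: %v\n", url, err)
                time.Sleep(2500 * time.Millisecond) // rough inter-probe gap
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
            return
        }
    }

Against the VM in this report every attempt would take the error branch, which is exactly the alternation of healthz timeouts and log-gathering blocks visible below.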
	I0816 05:36:41.208147    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:41.220958    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:41.221026    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:41.231946    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:41.232024    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:41.250015    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:41.250083    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:41.260778    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:41.260850    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:41.273400    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:41.273474    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:41.284494    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:41.284567    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:41.295392    8876 logs.go:276] 0 containers: []
	W0816 05:36:41.295408    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:41.295470    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:41.309808    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:41.309829    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:41.309835    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
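Note that the "describe nodes" snapshot is taken with the node-local binary (/var/lib/minikube/binaries/v1.24.1/kubectl) and the kubeconfig stored on the VM, rather than through the host's kubectl; that keeps the snapshot independent of the host's context, though it still has to reach the same apiserver the healthz probe keeps timing out against.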
	I0816 05:36:41.345522    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:41.345533    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:41.359163    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:41.359175    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:41.373517    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:41.373528    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:41.377714    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:41.377723    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:41.392463    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:41.392473    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:41.431321    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:41.431332    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:41.445731    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:41.445740    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:41.460404    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:41.460414    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:41.475292    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:41.475302    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:41.498921    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:41.498934    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:41.537346    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:41.537360    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:41.549070    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:41.549080    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:41.560698    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:41.560724    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:41.572477    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:41.572490    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:41.593373    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:41.593384    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:41.610782    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:41.610792    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:44.124362    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:49.126559    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:49.126663    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:49.138547    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:49.138619    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:49.149330    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:49.149392    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:49.159221    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:49.159284    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:49.169954    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:49.170030    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:49.180711    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:49.180786    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:49.191848    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:49.191916    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:49.201737    8876 logs.go:276] 0 containers: []
	W0816 05:36:49.201750    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:49.201815    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:49.212630    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:49.212647    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:49.212655    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:49.217041    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:49.217050    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:49.231305    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:49.231317    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:49.268814    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:49.268827    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:49.280454    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:49.280467    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:49.293015    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:49.293026    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:49.304060    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:49.304069    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:49.318786    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:49.318798    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:49.334057    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:49.334068    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:49.351945    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:49.351956    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:49.363836    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:49.363849    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:49.375785    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:49.375797    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:49.414266    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:49.414276    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:49.428617    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:49.428630    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:49.465125    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:49.465136    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:49.479083    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:49.479096    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:49.492898    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:49.492910    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:52.020408    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:36:57.022663    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:36:57.022788    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:36:57.034057    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:36:57.034138    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:36:57.044880    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:36:57.044956    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:36:57.056704    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:36:57.056776    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:36:57.067478    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:36:57.067540    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:36:57.077997    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:36:57.078070    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:36:57.089334    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:36:57.089434    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:36:57.101389    8876 logs.go:276] 0 containers: []
	W0816 05:36:57.101401    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:36:57.101467    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:36:57.111525    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:36:57.111545    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:36:57.111551    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:36:57.150157    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:36:57.150169    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:36:57.188871    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:36:57.188882    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:36:57.203145    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:36:57.203156    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:36:57.218234    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:36:57.218244    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:36:57.231364    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:36:57.231379    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:36:57.244939    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:36:57.244952    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:36:57.249099    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:36:57.249109    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:36:57.260769    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:36:57.260781    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:36:57.272590    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:36:57.272600    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:36:57.284145    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:36:57.284158    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:36:57.296160    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:36:57.296175    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:36:57.308015    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:36:57.308026    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:36:57.333000    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:36:57.333008    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:36:57.346942    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:36:57.346952    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:36:57.386740    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:36:57.386767    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:36:57.400861    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:36:57.400872    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:36:59.920147    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:04.922746    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:04.922915    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:04.938457    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:04.938548    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:04.950681    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:04.950754    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:04.961992    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:04.962065    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:04.972988    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:04.973067    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:04.986608    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:04.986679    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:04.997969    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:04.998046    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:05.008199    8876 logs.go:276] 0 containers: []
	W0816 05:37:05.008213    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:05.008277    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:05.018652    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:05.018674    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:05.018681    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:05.032705    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:05.032715    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:05.043740    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:05.043753    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:05.055266    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:05.055276    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:05.060004    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:05.060011    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:05.098416    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:05.098428    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:05.112552    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:05.112566    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:05.148745    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:05.148758    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:05.160862    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:05.160872    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:05.172458    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:05.172472    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:05.183553    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:05.183567    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:05.208647    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:05.208657    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:05.220026    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:05.220038    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:05.260609    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:05.260629    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:05.275495    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:05.275507    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:05.290557    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:05.290569    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:05.308766    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:05.308776    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:07.823833    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:12.826118    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:12.826335    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:12.842295    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:12.842379    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:12.855416    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:12.855491    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:12.866521    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:12.866596    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:12.876704    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:12.876768    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:12.887584    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:12.887659    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:12.898689    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:12.898754    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:12.908933    8876 logs.go:276] 0 containers: []
	W0816 05:37:12.908943    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:12.908997    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:12.919298    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:12.919315    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:12.919320    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:12.933860    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:12.933872    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:12.945243    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:12.945255    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:12.963033    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:12.963045    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:12.974187    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:12.974197    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:12.999004    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:12.999015    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:13.011235    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:13.011246    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:13.049121    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:13.049132    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:13.063346    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:13.063357    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:13.074877    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:13.074887    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:13.089646    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:13.089658    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:13.093707    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:13.093713    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:13.128098    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:13.128113    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:13.143238    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:13.143247    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:13.156081    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:13.156096    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:13.193045    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:13.193055    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:13.211426    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:13.211442    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:15.724691    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:20.727081    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:20.727448    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:20.766315    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:20.766453    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:20.788036    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:20.788145    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:20.808091    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:20.808165    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:20.824910    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:20.824982    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:20.836256    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:20.836325    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:20.847170    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:20.847237    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:20.857620    8876 logs.go:276] 0 containers: []
	W0816 05:37:20.857636    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:20.857695    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:20.868364    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:20.868381    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:20.868387    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:20.907534    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:20.907545    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:20.942803    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:20.942813    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:20.982421    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:20.982435    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:20.996810    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:20.996821    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:21.012812    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:21.012823    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:21.024422    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:21.024432    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:21.042630    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:21.042640    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:21.055677    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:21.055693    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:21.070757    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:21.070768    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:21.081753    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:21.081763    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:21.106035    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:21.106045    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:21.110018    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:21.110027    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:21.121522    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:21.121534    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:21.133343    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:21.133354    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:21.153411    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:21.153425    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:21.165087    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:21.165098    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:23.678636    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:28.680966    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:28.681179    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:28.705731    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:28.705851    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:28.722128    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:28.722207    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:28.739420    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:28.739498    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:28.750897    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:28.750970    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:28.761392    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:28.761461    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:28.771569    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:28.771639    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:28.781408    8876 logs.go:276] 0 containers: []
	W0816 05:37:28.781419    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:28.781487    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:28.796165    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:28.796181    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:28.796187    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:28.808111    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:28.808122    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:28.844548    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:28.844563    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:28.858090    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:28.858100    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:28.869433    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:28.869442    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:28.885501    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:28.885518    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:28.889630    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:28.889638    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:28.927547    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:28.927559    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:28.943539    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:28.943550    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:28.964792    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:28.964803    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:28.976456    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:28.976468    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:28.994172    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:28.994182    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:29.006248    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:29.006259    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:29.020398    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:29.020408    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:29.031817    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:29.031829    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:29.056148    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:29.056155    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:29.095476    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:29.095488    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:31.611894    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:36.614085    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:36.614309    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:36.632929    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:36.633029    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:36.646500    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:36.646575    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:36.658404    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:36.658504    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:36.670491    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:36.670561    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:36.680998    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:36.681071    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:36.692528    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:36.692593    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:36.703003    8876 logs.go:276] 0 containers: []
	W0816 05:37:36.703022    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:36.703079    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:36.713153    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:36.713170    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:36.713176    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:36.750845    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:36.750859    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:36.764072    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:36.764084    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:36.775654    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:36.775665    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:36.780504    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:36.780515    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:36.797991    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:36.798000    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:36.810758    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:36.810770    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:36.822161    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:36.822173    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:36.843589    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:36.843606    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:36.882930    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:36.882943    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:36.921613    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:36.921624    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:36.935281    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:36.935292    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:36.946426    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:36.946438    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:36.971341    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:36.971352    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:36.987578    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:36.987589    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:37.002437    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:37.002448    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:37.022041    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:37.022053    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:39.540111    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:44.542291    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:44.542480    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:44.557337    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:44.557417    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:44.568680    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:44.568753    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:44.578998    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:44.579082    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:44.589414    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:44.589489    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:44.599790    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:44.599859    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:44.610277    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:44.610343    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:44.620686    8876 logs.go:276] 0 containers: []
	W0816 05:37:44.620698    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:44.620762    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:44.631919    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:44.631936    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:44.631942    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:44.646204    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:44.646217    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:44.659897    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:44.659909    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:44.671033    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:44.671045    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:44.685760    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:44.685771    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:44.698572    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:44.698583    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:44.721409    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:44.721416    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:44.733317    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:44.733328    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:44.767571    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:44.767582    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:44.779123    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:44.779135    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:44.818139    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:44.818149    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:44.822413    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:44.822421    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:44.836980    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:44.836990    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:44.850055    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:44.850066    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:44.867208    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:44.867219    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:44.879107    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:44.879117    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:44.916498    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:44.916510    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:47.430997    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:37:52.433197    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:37:52.433324    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:37:52.447047    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:37:52.447129    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:37:52.459109    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:37:52.459184    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:37:52.469597    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:37:52.469673    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:37:52.481845    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:37:52.481916    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:37:52.492477    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:37:52.492547    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:37:52.503692    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:37:52.503758    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:37:52.522414    8876 logs.go:276] 0 containers: []
	W0816 05:37:52.522426    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:37:52.522488    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:37:52.533423    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:37:52.533442    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:37:52.533448    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:37:52.545060    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:37:52.545071    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:37:52.559502    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:37:52.559514    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:37:52.582339    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:37:52.582358    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:37:52.587747    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:37:52.587757    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:37:52.603044    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:37:52.603055    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:37:52.617571    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:37:52.617581    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:37:52.629558    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:37:52.629568    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:37:52.642993    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:37:52.643004    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:37:52.662599    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:37:52.662609    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:37:52.675845    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:37:52.675859    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:37:52.687113    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:37:52.687124    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:37:52.723606    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:37:52.723616    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:37:52.758397    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:37:52.758409    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:37:52.770089    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:37:52.770099    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:37:52.781819    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:37:52.781828    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:37:52.819809    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:37:52.819820    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:37:55.333628    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:00.335892    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:00.336061    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:00.353608    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:00.353694    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:00.369289    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:00.369362    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:00.380135    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:00.380215    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:00.391556    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:00.391631    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:00.401688    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:00.401756    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:00.411938    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:00.412012    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:00.422946    8876 logs.go:276] 0 containers: []
	W0816 05:38:00.422957    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:00.423017    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:00.433763    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
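[Editor's note] Before each gathering pass, minikube enumerates container IDs for every control-plane component by filtering `docker ps -a` on the `k8s_<component>` name prefix that the kubelet's Docker integration (dockershim/cri-dockerd) gives its containers. A hedged sketch of that enumeration; the real code runs these commands through its ssh_runner against the guest VM, while this sketch shells out locally for brevity.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listComponentContainers mirrors the log's
// `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` calls.
func listComponentContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listComponentContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Matches logs.go:276 ("N containers: [...]"); an empty slice
		// triggers the logs.go:278 warning seen for "kindnet" above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```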
	I0816 05:38:00.433779    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:00.433784    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:00.447924    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:00.447937    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:00.462667    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:00.462679    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:00.478505    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:00.478516    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:00.496039    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:00.496052    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:00.511030    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:00.511043    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:00.526194    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:00.526205    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:00.563857    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:00.563869    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:00.578883    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:00.578896    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:00.591042    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:00.591054    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:00.602344    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:00.602354    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:00.626608    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:00.626618    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:00.641141    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:00.641152    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:00.658363    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:00.658375    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:00.696807    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:00.696817    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:00.700747    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:00.700754    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:00.737943    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:00.737956    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
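[Editor's note] Each diagnostic cycle then tails the last 400 lines of every discovered container and adds the system-level sources: kubelet and docker/cri-docker journals, filtered dmesg, `kubectl describe nodes`, and a crictl-with-docker-fallback container status. A compact sketch of that fan-out; the command strings are copied from the log, but the loop structure itself is an assumption.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Per-container tails, as in "docker logs --tail 400 <id>".
	containers := []string{"b9e947a22443", "2c32b35f94e1"} // sample IDs from the log
	var cmds []string
	for _, id := range containers {
		cmds = append(cmds, "docker logs --tail 400 "+id)
	}
	// System-level sources gathered in the same pass (verbatim from the log).
	cmds = append(cmds,
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u docker -u cri-docker -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	)
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("--- %s ---\n%s", c, out)
	}
}
```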
	I0816 05:38:03.250206    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:08.251105    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:08.251373    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:08.280321    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:08.280450    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:08.298230    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:08.298327    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:08.311960    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:08.312043    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:08.325200    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:08.325280    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:08.335855    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:08.335919    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:08.346658    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:08.346726    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:08.360841    8876 logs.go:276] 0 containers: []
	W0816 05:38:08.360851    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:08.360914    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:08.371096    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:08.371112    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:08.371117    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:08.411491    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:08.411503    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:08.457531    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:08.457546    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:08.472006    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:08.472018    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:08.486245    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:08.486256    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:08.501341    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:08.501351    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:08.525720    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:08.525728    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:08.530252    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:08.530259    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:08.548715    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:08.548728    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:08.565089    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:08.565099    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:08.576874    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:08.576886    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:08.588520    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:08.588531    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:08.600042    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:08.600052    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:08.637886    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:08.637902    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:08.652286    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:08.652299    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:08.663525    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:08.663536    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:08.675854    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:08.675870    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:11.189009    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:16.191427    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:16.191894    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:16.235123    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:16.235283    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:16.256143    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:16.256248    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:16.270580    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:16.270660    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:16.284597    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:16.284680    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:16.295196    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:16.295271    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:16.310405    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:16.310476    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:16.321949    8876 logs.go:276] 0 containers: []
	W0816 05:38:16.321964    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:16.322030    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:16.332836    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:16.332853    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:16.332859    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:16.351795    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:16.351806    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:16.363727    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:16.363738    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:16.387429    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:16.387439    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:16.426041    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:16.426051    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:16.440292    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:16.440303    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:16.455449    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:16.455462    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:16.468381    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:16.468392    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:16.506160    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:16.506171    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:16.517481    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:16.517493    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:16.528974    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:16.528984    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:16.546132    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:16.546144    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:16.558697    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:16.558710    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:16.572864    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:16.572875    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:16.613364    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:16.613378    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:16.627042    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:16.627055    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:16.644412    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:16.644426    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:19.150445    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:24.152710    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:24.152913    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:24.172474    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:24.172570    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:24.187985    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:24.188068    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:24.200485    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:24.200562    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:24.210885    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:24.210952    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:24.221589    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:24.221658    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:24.232752    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:24.232826    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:24.243070    8876 logs.go:276] 0 containers: []
	W0816 05:38:24.243081    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:24.243142    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:24.253398    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:24.253415    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:24.253420    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:24.268046    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:24.268056    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:24.280294    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:24.280305    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:24.284673    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:24.284679    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:24.298519    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:24.298533    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:24.340237    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:24.340249    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:24.350884    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:24.350896    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:24.373072    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:24.373082    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:24.407916    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:24.407931    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:24.426982    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:24.426996    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:24.438789    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:24.438801    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:24.453595    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:24.453606    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:24.465237    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:24.465247    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:24.478722    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:24.478735    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:24.497291    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:24.497304    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:24.511269    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:24.511280    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:24.523081    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:24.523095    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:27.061442    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:32.061963    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:32.062233    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:32.087970    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:32.088095    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:32.104648    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:32.104739    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:32.117991    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:32.118071    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:32.129767    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:32.129847    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:32.141621    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:32.141690    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:32.152215    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:32.152287    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:32.162657    8876 logs.go:276] 0 containers: []
	W0816 05:38:32.162669    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:32.162734    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:32.173257    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:32.173274    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:32.173281    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:32.187422    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:32.187432    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:32.204777    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:32.204789    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:32.216090    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:32.216104    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:32.229942    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:32.229951    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:32.243960    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:32.243970    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:32.259521    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:32.259533    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:32.272445    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:32.272456    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:32.285075    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:32.285089    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:32.319670    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:32.319684    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:32.331660    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:32.331673    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:32.344999    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:32.345009    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:32.367505    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:32.367516    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:32.378405    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:32.378417    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:32.382473    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:32.382482    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:32.420836    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:32.420848    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:32.439734    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:32.439747    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:34.980610    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:39.983090    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:39.983220    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:39.994626    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:39.994709    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:40.005604    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:40.005681    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:40.016941    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:40.017015    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:40.027413    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:40.027494    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:40.038201    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:40.038265    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:40.048434    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:40.048507    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:40.059056    8876 logs.go:276] 0 containers: []
	W0816 05:38:40.059069    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:40.059131    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:40.069480    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:40.069498    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:40.069503    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:40.083818    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:40.083831    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:40.099114    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:40.099124    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:40.116270    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:40.116282    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:40.131266    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:40.131279    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:40.143226    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:40.143239    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:40.155137    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:40.155151    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:40.176606    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:40.176616    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:40.180914    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:40.180923    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:40.216998    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:40.217010    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:40.254764    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:40.254775    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:40.269362    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:40.269376    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:40.281061    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:40.281072    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:40.294646    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:40.294662    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:40.306530    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:40.306541    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:40.344038    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:40.344047    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:40.357900    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:40.357913    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:42.871300    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:47.874088    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:47.874277    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:38:47.893724    8876 logs.go:276] 2 containers: [2881150c8a81 a54c050fa5fd]
	I0816 05:38:47.893817    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:38:47.906901    8876 logs.go:276] 2 containers: [b9e947a22443 d464a7742a93]
	I0816 05:38:47.906978    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:38:47.918837    8876 logs.go:276] 1 containers: [c05e15f409ec]
	I0816 05:38:47.918902    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:38:47.929442    8876 logs.go:276] 2 containers: [f095175f88f2 d49ec1605243]
	I0816 05:38:47.929517    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:38:47.939539    8876 logs.go:276] 1 containers: [b161cd345913]
	I0816 05:38:47.939610    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:38:47.950792    8876 logs.go:276] 2 containers: [2c32b35f94e1 753544007c33]
	I0816 05:38:47.950867    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:38:47.961402    8876 logs.go:276] 0 containers: []
	W0816 05:38:47.961414    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:38:47.961472    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:38:47.971652    8876 logs.go:276] 2 containers: [d2bb065132a8 8de666a5125d]
	I0816 05:38:47.971668    8876 logs.go:123] Gathering logs for kube-apiserver [a54c050fa5fd] ...
	I0816 05:38:47.971673    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54c050fa5fd"
	I0816 05:38:48.010559    8876 logs.go:123] Gathering logs for etcd [d464a7742a93] ...
	I0816 05:38:48.010570    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d464a7742a93"
	I0816 05:38:48.025255    8876 logs.go:123] Gathering logs for kube-scheduler [d49ec1605243] ...
	I0816 05:38:48.025271    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d49ec1605243"
	I0816 05:38:48.039777    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:38:48.039787    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:38:48.078213    8876 logs.go:123] Gathering logs for etcd [b9e947a22443] ...
	I0816 05:38:48.078220    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9e947a22443"
	I0816 05:38:48.091811    8876 logs.go:123] Gathering logs for kube-proxy [b161cd345913] ...
	I0816 05:38:48.091824    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b161cd345913"
	I0816 05:38:48.103312    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:38:48.103322    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:38:48.137459    8876 logs.go:123] Gathering logs for kube-apiserver [2881150c8a81] ...
	I0816 05:38:48.137471    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2881150c8a81"
	I0816 05:38:48.154848    8876 logs.go:123] Gathering logs for coredns [c05e15f409ec] ...
	I0816 05:38:48.154859    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c05e15f409ec"
	I0816 05:38:48.166141    8876 logs.go:123] Gathering logs for kube-scheduler [f095175f88f2] ...
	I0816 05:38:48.166151    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f095175f88f2"
	I0816 05:38:48.183374    8876 logs.go:123] Gathering logs for kube-controller-manager [2c32b35f94e1] ...
	I0816 05:38:48.183386    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c32b35f94e1"
	I0816 05:38:48.201326    8876 logs.go:123] Gathering logs for kube-controller-manager [753544007c33] ...
	I0816 05:38:48.201338    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 753544007c33"
	I0816 05:38:48.214734    8876 logs.go:123] Gathering logs for storage-provisioner [d2bb065132a8] ...
	I0816 05:38:48.214747    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bb065132a8"
	I0816 05:38:48.225928    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:38:48.225943    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:38:48.249274    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:38:48.249288    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:38:48.253802    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:38:48.253811    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:38:48.266180    8876 logs.go:123] Gathering logs for storage-provisioner [8de666a5125d] ...
	I0816 05:38:48.266192    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8de666a5125d"
	I0816 05:38:50.785997    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:38:55.788250    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:38:55.788310    8876 kubeadm.go:597] duration metric: took 4m4.162423416s to restartPrimaryControlPlane
	W0816 05:38:55.788368    8876 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 05:38:55.788390    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0816 05:38:56.797072    8876 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.008688292s)
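[Editor's note] Having burned the restart budget (4m4.16s per kubeadm.go:597 above), minikube falls back to a full `kubeadm reset` against the cri-dockerd socket, prefixing PATH so the version-pinned binaries under /var/lib/minikube/binaries/v1.24.1 are used. A sketch of that invocation; the command text is from the log, the error handling and duration print are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Verbatim command from the log; bash expands $PATH at run time.
	cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" ` +
		`kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`
	start := time.Now()
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("reset failed: %v\n%s", err, out)
		return
	}
	// ssh_runner.go:235 logs the completion with a duration metric like this.
	fmt.Printf("Completed: kubeadm reset (%s)\n", time.Since(start))
}
```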
	I0816 05:38:56.797153    8876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:38:56.802158    8876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 05:38:56.804824    8876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 05:38:56.807474    8876 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 05:38:56.807480    8876 kubeadm.go:157] found existing configuration files:
	
	I0816 05:38:56.807500    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/admin.conf
	I0816 05:38:56.809893    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 05:38:56.809915    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 05:38:56.812745    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/kubelet.conf
	I0816 05:38:56.816002    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 05:38:56.816038    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 05:38:56.818796    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/controller-manager.conf
	I0816 05:38:56.821322    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 05:38:56.821342    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 05:38:56.824461    8876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/scheduler.conf
	I0816 05:38:56.827563    8876 kubeadm.go:163] "https://control-plane.minikube.internal:51397" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51397 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 05:38:56.827585    8876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
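[Editor's note] The cleanup above applies one fixed pattern to each kubeconfig under /etc/kubernetes: grep for the expected control-plane endpoint, and on any non-zero exit (status 2 here, because the files do not exist after the reset) remove the file so `kubeadm init` can rewrite it. A hedged sketch of that pattern; endpoint and paths are taken from the log, the helper itself is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfig reproduces the grep-then-rm sequence logged at
// kubeadm.go:163: keep the file only if it already points at the
// expected control-plane endpoint.
func cleanStaleKubeconfig(endpoint, path string) {
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
		// Exit status 2 in the log means the file is missing; status 1
		// would mean it exists but lacks the endpoint. Either way, remove.
		fmt.Printf("%q may not be in %s - will remove: %v\n", endpoint, path, err)
		_ = exec.Command("sudo", "rm", "-f", path).Run()
	}
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf",
		"controller-manager.conf", "scheduler.conf"} {
		cleanStaleKubeconfig("https://control-plane.minikube.internal:51397",
			"/etc/kubernetes/"+f)
	}
}
```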
	I0816 05:38:56.830330    8876 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 05:38:56.847746    8876 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0816 05:38:56.847775    8876 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 05:38:56.896700    8876 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 05:38:56.896786    8876 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 05:38:56.896858    8876 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 05:38:56.945877    8876 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 05:38:56.950081    8876 out.go:235]   - Generating certificates and keys ...
	I0816 05:38:56.950181    8876 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 05:38:56.950304    8876 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 05:38:56.950344    8876 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 05:38:56.950375    8876 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 05:38:56.950409    8876 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 05:38:56.950436    8876 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 05:38:56.950530    8876 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 05:38:56.950594    8876 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 05:38:56.950684    8876 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 05:38:56.950725    8876 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 05:38:56.950747    8876 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 05:38:56.950774    8876 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 05:38:57.006726    8876 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 05:38:57.046099    8876 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 05:38:57.194402    8876 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 05:38:57.297786    8876 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 05:38:57.325562    8876 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 05:38:57.325989    8876 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 05:38:57.326080    8876 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 05:38:57.409045    8876 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 05:38:57.413003    8876 out.go:235]   - Booting up control plane ...
	I0816 05:38:57.413051    8876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 05:38:57.413094    8876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 05:38:57.413131    8876 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 05:38:57.413174    8876 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 05:38:57.413342    8876 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 05:39:01.915691    8876 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501938 seconds
	I0816 05:39:01.915780    8876 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 05:39:01.920954    8876 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 05:39:02.429208    8876 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 05:39:02.429319    8876 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-972000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 05:39:02.932912    8876 kubeadm.go:310] [bootstrap-token] Using token: nyvah2.rc2dbnw87lmpdpnb
	I0816 05:39:02.936361    8876 out.go:235]   - Configuring RBAC rules ...
	I0816 05:39:02.936460    8876 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 05:39:02.936512    8876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 05:39:02.954204    8876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 05:39:02.955040    8876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 05:39:02.955876    8876 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 05:39:02.956640    8876 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 05:39:02.959925    8876 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 05:39:03.145409    8876 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 05:39:03.337473    8876 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 05:39:03.338436    8876 kubeadm.go:310] 
	I0816 05:39:03.338469    8876 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 05:39:03.338476    8876 kubeadm.go:310] 
	I0816 05:39:03.338519    8876 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 05:39:03.338522    8876 kubeadm.go:310] 
	I0816 05:39:03.338534    8876 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 05:39:03.338638    8876 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 05:39:03.338663    8876 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 05:39:03.338672    8876 kubeadm.go:310] 
	I0816 05:39:03.338704    8876 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 05:39:03.338711    8876 kubeadm.go:310] 
	I0816 05:39:03.338768    8876 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 05:39:03.338774    8876 kubeadm.go:310] 
	I0816 05:39:03.338833    8876 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 05:39:03.338910    8876 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 05:39:03.338977    8876 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 05:39:03.339040    8876 kubeadm.go:310] 
	I0816 05:39:03.339079    8876 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 05:39:03.339163    8876 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 05:39:03.339168    8876 kubeadm.go:310] 
	I0816 05:39:03.339219    8876 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nyvah2.rc2dbnw87lmpdpnb \
	I0816 05:39:03.339313    8876 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23cf10825d548a004e2d3ef8e1c65218486081db837b36803636fece4fac457f \
	I0816 05:39:03.339327    8876 kubeadm.go:310] 	--control-plane 
	I0816 05:39:03.339330    8876 kubeadm.go:310] 
	I0816 05:39:03.339472    8876 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 05:39:03.339480    8876 kubeadm.go:310] 
	I0816 05:39:03.339532    8876 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nyvah2.rc2dbnw87lmpdpnb \
	I0816 05:39:03.339591    8876 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23cf10825d548a004e2d3ef8e1c65218486081db837b36803636fece4fac457f 
	I0816 05:39:03.339649    8876 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 05:39:03.339666    8876 cni.go:84] Creating CNI manager for ""
	I0816 05:39:03.339675    8876 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:39:03.342807    8876 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 05:39:03.348834    8876 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 05:39:03.352877    8876 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
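[Editor's note] With the control plane up, minikube writes a 496-byte bridge CNI conflist into /etc/cni/net.d/1-k8s.conflist (the "scp memory" line above streams it from memory rather than a local file). The log does not include the file body, so the sketch below generates a representative bridge-plus-portmap conflist: every field value is an assumption, not minikube's exact payload.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Representative bridge CNI config; values are assumptions, not the
	// exact 496 bytes minikube scp'd into the guest.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	b, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(b)) // would land in /etc/cni/net.d/1-k8s.conflist
}
```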
	I0816 05:39:03.357761    8876 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 05:39:03.357813    8876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 05:39:03.357829    8876 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-972000 minikube.k8s.io/updated_at=2024_08_16T05_39_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=stopped-upgrade-972000 minikube.k8s.io/primary=true
	I0816 05:39:03.405808    8876 kubeadm.go:1113] duration metric: took 48.036584ms to wait for elevateKubeSystemPrivileges
	I0816 05:39:03.405838    8876 ops.go:34] apiserver oom_adj: -16
	I0816 05:39:03.405845    8876 kubeadm.go:394] duration metric: took 4m11.794322208s to StartCluster
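[Editor's note] The `-16` reported at ops.go:34 above comes from `cat /proc/$(pgrep kube-apiserver)/oom_adj`: the legacy OOM adjust value, where a strongly negative number shields the apiserver from the kernel's OOM killer. A sketch of that read; using `pgrep -n` (newest match) is a simplification of the log's plain `pgrep`, and the helper name is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj finds a kube-apiserver PID and reads its legacy
// /proc/<pid>/oom_adj value (-16 in the log).
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	val, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(val)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}
```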
	I0816 05:39:03.405855    8876 settings.go:142] acquiring lock: {Name:mkec9dae897ed6cd1355cb2ba10161c54c163fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:39:03.405948    8876 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:39:03.406353    8876 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/kubeconfig: {Name:mka7b2a1dac03f0ea4ac28563b4fe884a2b1b206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:39:03.406551    8876 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:39:03.406594    8876 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 05:39:03.406641    8876 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-972000"
	I0816 05:39:03.406658    8876 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-972000"
	W0816 05:39:03.406661    8876 addons.go:243] addon storage-provisioner should already be in state true
	I0816 05:39:03.406672    8876 host.go:66] Checking if "stopped-upgrade-972000" exists ...
	I0816 05:39:03.406675    8876 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-972000"
	I0816 05:39:03.406697    8876 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:39:03.406743    8876 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-972000"
	I0816 05:39:03.407824    8876 kapi.go:59] client config for stopped-upgrade-972000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/stopped-upgrade-972000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-6249/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101e55610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 05:39:03.407939    8876 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-972000"
	W0816 05:39:03.407943    8876 addons.go:243] addon default-storageclass should already be in state true
	I0816 05:39:03.407949    8876 host.go:66] Checking if "stopped-upgrade-972000" exists ...
	I0816 05:39:03.410809    8876 out.go:177] * Verifying Kubernetes components...
	I0816 05:39:03.411160    8876 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 05:39:03.414987    8876 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 05:39:03.414993    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
	I0816 05:39:03.418792    8876 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 05:39:03.422855    8876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:39:03.424199    8876 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 05:39:03.424205    8876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 05:39:03.424211    8876 sshutil.go:53] new ssh client: &{IP:localhost Port:51362 SSHKeyPath:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/stopped-upgrade-972000/id_rsa Username:docker}
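The `sshutil.go:53` lines show the addon manifests being pushed over an SSH connection forwarded to `localhost:51362` with the machine's `id_rsa` key. A minimal sketch of the same push with `golang.org/x/crypto/ssh`, assuming a hypothetical key path and manifest body; the remote `tee` stands in for minikube's internal scp step:

```go
package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // hypothetical key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "localhost:51362", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	manifest := []byte("# storageclass.yaml contents here\n")
	sess.Stdin = bytes.NewReader(manifest)
	// Write the manifest into the guest's addons directory.
	if err := sess.Run("sudo tee /etc/kubernetes/addons/storageclass.yaml >/dev/null"); err != nil {
		panic(err)
	}
}
```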
	I0816 05:39:03.502784    8876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:39:03.507594    8876 api_server.go:52] waiting for apiserver process to appear ...
	I0816 05:39:03.507639    8876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:39:03.511209    8876 api_server.go:72] duration metric: took 104.647709ms to wait for apiserver process to appear ...
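The `api_server.go:52/72` pair waits for a kube-apiserver process to exist by running `pgrep -xnf` in the guest and records how long that took. A minimal local sketch of the same wait, assuming `pgrep` is available on PATH:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	for {
		// pgrep exits 0 once a process matches the full command line (-f),
		// approximating the `pgrep -xnf` probe in the log above.
		if err := exec.Command("pgrep", "-f", "kube-apiserver").Run(); err == nil {
			break
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Printf("took %s to wait for apiserver process\n", time.Since(start))
}
```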
	I0816 05:39:03.511218    8876 api_server.go:88] waiting for apiserver healthz status ...
	I0816 05:39:03.511226    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:03.549327    8876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 05:39:03.565049    8876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 05:39:03.930805    8876 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 05:39:03.930819    8876 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 05:39:08.513333    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:08.513369    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:13.513675    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:13.513713    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:18.514145    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:18.514205    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:23.514612    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:23.514655    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:28.515310    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:28.515355    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:33.516154    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:33.516193    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
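Each `api_server.go:253/269` pair above is one probe of the apiserver's `/healthz` endpoint with a roughly five-second client timeout; "Client.Timeout exceeded while awaiting headers" means no response headers arrived in that window, which is why the probes pace themselves about five seconds apart. A minimal sketch of the same polling pattern, assuming the guest address from the log and skipping certificate verification for brevity:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between probes above
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert in this setup.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 12; i++ { // roughly one minute of probing
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// Mirrors the api_server.go:269 "stopped:" lines; the 5s
			// timeout provides the pacing, so no extra sleep is needed.
			fmt.Println("stopped:", err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthy")
			return
		}
	}
}
```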
	W0816 05:39:33.932710    8876 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
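The failed callback first lists all StorageClasses and then marks "standard" as default via the `storageclass.kubernetes.io/is-default-class` annotation; here the list call never reaches the apiserver at all. A minimal client-go sketch of the listing step that timed out, assuming a hypothetical kubeconfig path:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; any reachable cluster config would do.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// This is the call that produced the i/o timeout in the warning above.
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err) // "Error listing StorageClasses: ... i/o timeout"
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name, sc.Annotations["storageclass.kubernetes.io/is-default-class"])
	}
}
```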
	I0816 05:39:33.937608    8876 out.go:177] * Enabled addons: storage-provisioner
	I0816 05:39:33.946479    8876 addons.go:510] duration metric: took 30.540411458s for enable addons: enabled=[storage-provisioner]
	I0816 05:39:38.516776    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:38.516859    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:43.517959    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:43.517990    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:48.519933    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:48.519967    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:53.522050    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:53.522088    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:39:58.524268    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:39:58.524317    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:40:03.526517    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:40:03.526680    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:40:03.537742    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:40:03.537828    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:40:03.561005    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:40:03.561092    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:40:03.589710    8876 logs.go:276] 2 containers: [9bdf9243ce65 52901cfd4f78]
	I0816 05:40:03.589784    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:40:03.600476    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:40:03.600546    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:40:03.611435    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:40:03.611514    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:40:03.627649    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:40:03.627720    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:40:03.638016    8876 logs.go:276] 0 containers: []
	W0816 05:40:03.638027    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:40:03.638117    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:40:03.649148    8876 logs.go:276] 1 containers: [57d409f82a63]
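The block above enumerates each control-plane component by running `docker ps -a` with a name filter and an ID-only format string; the `logs.go:276` lines report how many IDs came back. A minimal sketch of that discovery step via `os/exec`, assuming the `docker` CLI is on PATH:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mimics the docker ps invocations in the log above: it
// returns the IDs of all containers whose name matches the filter.
func containerIDs(nameFilter string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+nameFilter,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:276
	}
}
```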
	I0816 05:40:03.649168    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:40:03.649176    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:40:03.661159    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:40:03.661170    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:40:03.677184    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:40:03.677194    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:40:03.702027    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:40:03.702035    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:40:03.706323    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:40:03.706332    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:40:03.741928    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:40:03.741943    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:40:03.756733    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:40:03.756743    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:40:03.771547    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:40:03.771556    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:40:03.783311    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:40:03.783322    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:40:03.821363    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:40:03.821375    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:40:03.833162    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:40:03.833172    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:40:03.848778    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:40:03.848789    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:40:03.867517    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:40:03.867529    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
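The "container status" command uses a shell fallback chain: `which crictl || echo crictl` resolves a concrete `crictl` path when one exists, and the outer `|| sudo docker ps -a` falls back to Docker if the crictl invocation fails. A minimal Go sketch of the same preference order, assuming either CLI may be missing:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when it is on PATH, mirroring `which crictl || echo crictl`.
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
			fmt.Print(string(out))
			return
		}
	}
	// Fall back to Docker, mirroring `|| sudo docker ps -a`.
	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```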
	I0816 05:40:06.381414    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:40:11.383735    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:40:11.383852    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:40:11.400448    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:40:11.400517    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:40:11.411082    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:40:11.411147    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:40:11.421772    8876 logs.go:276] 2 containers: [9bdf9243ce65 52901cfd4f78]
	I0816 05:40:11.421841    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:40:11.432113    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:40:11.432180    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:40:11.442452    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:40:11.442523    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:40:11.452807    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:40:11.452885    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:40:11.467014    8876 logs.go:276] 0 containers: []
	W0816 05:40:11.467025    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:40:11.467087    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:40:11.480470    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:40:11.480486    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:40:11.480492    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:40:11.484993    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:40:11.484999    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:40:11.520754    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:40:11.520766    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:40:11.535322    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:40:11.535333    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:40:11.549698    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:40:11.549711    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:40:11.561163    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:40:11.561176    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:40:11.575752    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:40:11.575761    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:40:11.587101    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:40:11.587113    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:40:11.624661    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:40:11.624668    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:40:11.635999    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:40:11.636010    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:40:11.653173    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:40:11.653185    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:40:11.677601    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:40:11.677607    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:40:11.692082    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:40:11.692091    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:40:14.205209    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:40:19.208026    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:40:19.208492    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:40:19.246850    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:40:19.246980    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:40:19.267076    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:40:19.267192    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:40:19.281881    8876 logs.go:276] 2 containers: [9bdf9243ce65 52901cfd4f78]
	I0816 05:40:19.281983    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:40:19.293539    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:40:19.293605    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:40:19.305803    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:40:19.305877    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:40:19.319800    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:40:19.319869    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:40:19.330525    8876 logs.go:276] 0 containers: []
	W0816 05:40:19.330540    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:40:19.330594    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:40:19.340846    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:40:19.340861    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:40:19.340872    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:40:19.354761    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:40:19.354773    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:40:19.369788    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:40:19.369797    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:40:19.381882    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:40:19.381894    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:40:19.399459    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:40:19.399467    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:40:19.435488    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:40:19.435497    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:40:19.439480    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:40:19.439489    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:40:19.473208    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:40:19.473220    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:40:19.489191    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:40:19.489203    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:40:19.514516    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:40:19.514522    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:40:19.525820    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:40:19.525832    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:40:19.537207    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:40:19.537219    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:40:19.549579    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:40:19.549590    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:40:22.063603    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:40:27.066283    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:40:27.066883    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:40:27.101207    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:40:27.101339    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:40:27.122295    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:40:27.122388    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:40:27.136904    8876 logs.go:276] 2 containers: [9bdf9243ce65 52901cfd4f78]
	I0816 05:40:27.136981    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:40:27.148708    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:40:27.148781    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:40:27.159179    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:40:27.159245    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:40:27.169958    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:40:27.170024    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:40:27.183275    8876 logs.go:276] 0 containers: []
	W0816 05:40:27.183285    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:40:27.183336    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:40:27.195709    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:40:27.195731    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:40:27.195736    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:40:27.231917    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:40:27.231929    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:40:27.236655    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:40:27.236665    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:40:27.272327    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:40:27.272341    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:40:27.286704    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:40:27.286716    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:40:27.300046    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:40:27.300055    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:40:27.314662    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:40:27.314672    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:40:27.327057    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:40:27.327070    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:40:27.351775    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:40:27.351786    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:40:27.363096    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:40:27.363110    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:40:27.374685    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:40:27.374697    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:40:27.390751    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:40:27.390760    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:40:27.410754    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:40:27.410763    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:40:29.923720    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:40:34.925388    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:40:34.925889    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:40:34.960886    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:40:34.961019    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:40:34.980062    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:40:34.980157    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:40:34.994331    8876 logs.go:276] 2 containers: [9bdf9243ce65 52901cfd4f78]
	I0816 05:40:34.994406    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:40:35.007105    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:40:35.007178    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:40:35.017495    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:40:35.017570    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:40:35.028019    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:40:35.028095    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:40:35.037856    8876 logs.go:276] 0 containers: []
	W0816 05:40:35.037868    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:40:35.037931    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:40:35.051324    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:40:35.051340    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:40:35.051345    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:40:35.056359    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:40:35.056369    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:40:35.071655    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:40:35.071665    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:40:35.089598    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:40:35.089609    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:40:35.112744    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:40:35.112755    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:40:35.149771    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:40:35.149779    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:40:35.184388    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:40:35.184403    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:40:35.198683    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:40:35.198696    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:40:35.215312    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:40:35.215325    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:40:35.226431    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:40:35.226444    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:40:35.238846    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:40:35.238860    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:40:35.250500    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:40:35.250513    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:40:35.261922    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:40:35.261934    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:40:37.775023    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:40:42.777786    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:40:42.778667    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:40:42.816164    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:40:42.816296    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:40:42.837521    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:40:42.837634    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:40:42.852500    8876 logs.go:276] 2 containers: [9bdf9243ce65 52901cfd4f78]
	I0816 05:40:42.852587    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:40:42.866980    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:40:42.867039    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:40:42.878062    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:40:42.878135    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:40:42.888714    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:40:42.888783    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:40:42.898923    8876 logs.go:276] 0 containers: []
	W0816 05:40:42.898935    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:40:42.898993    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:40:42.909196    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:40:42.909212    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:40:42.909217    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:40:42.934159    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:40:42.934167    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:40:42.969091    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:40:42.969102    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:40:42.984026    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:40:42.984037    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:40:42.996757    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:40:42.996770    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:40:43.012246    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:40:43.012255    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:40:43.023788    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:40:43.023800    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:40:43.041013    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:40:43.041024    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:40:43.053379    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:40:43.053391    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:40:43.066317    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:40:43.066330    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:40:43.102207    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:40:43.102221    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:40:43.106153    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:40:43.106161    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:40:43.120182    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:40:43.120193    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:40:45.633903    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:40:50.636479    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:40:50.636860    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:40:50.676960    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:40:50.677098    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:40:50.698497    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:40:50.698605    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:40:50.713808    8876 logs.go:276] 2 containers: [9bdf9243ce65 52901cfd4f78]
	I0816 05:40:50.713896    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:40:50.726210    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:40:50.726283    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:40:50.737321    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:40:50.737393    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:40:50.747466    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:40:50.747535    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:40:50.757236    8876 logs.go:276] 0 containers: []
	W0816 05:40:50.757246    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:40:50.757300    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:40:50.767653    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:40:50.767669    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:40:50.767674    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:40:50.781505    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:40:50.781516    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:40:50.796604    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:40:50.796614    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:40:50.819593    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:40:50.819602    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:40:50.830638    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:40:50.830651    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:40:50.842333    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:40:50.842343    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:40:50.859537    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:40:50.859546    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:40:50.896260    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:40:50.896268    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:40:50.900333    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:40:50.900340    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:40:50.935216    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:40:50.935228    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:40:50.963058    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:40:50.963067    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:40:50.977298    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:40:50.977309    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:40:50.994130    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:40:50.994144    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:40:53.507239    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:40:58.508019    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:40:58.508335    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:40:58.539099    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:40:58.539216    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:40:58.563096    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:40:58.563179    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:40:58.577013    8876 logs.go:276] 2 containers: [9bdf9243ce65 52901cfd4f78]
	I0816 05:40:58.577083    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:40:58.589231    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:40:58.589291    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:40:58.599533    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:40:58.599610    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:40:58.609853    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:40:58.609919    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:40:58.619641    8876 logs.go:276] 0 containers: []
	W0816 05:40:58.619653    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:40:58.619709    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:40:58.635034    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:40:58.635049    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:40:58.635054    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:40:58.671666    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:40:58.671674    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:40:58.685524    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:40:58.685535    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:40:58.700355    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:40:58.700365    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:40:58.711761    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:40:58.711772    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:40:58.735611    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:40:58.735619    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:40:58.747856    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:40:58.747867    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:40:58.766657    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:40:58.766666    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:40:58.771108    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:40:58.771117    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:40:58.805186    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:40:58.805198    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:40:58.819614    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:40:58.819623    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:40:58.832773    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:40:58.832782    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:40:58.844028    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:40:58.844040    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:41:01.357704    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:41:06.360060    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:41:06.360276    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:41:06.383337    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:41:06.383457    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:41:06.400669    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:41:06.400754    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:41:06.413692    8876 logs.go:276] 2 containers: [9bdf9243ce65 52901cfd4f78]
	I0816 05:41:06.413760    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:41:06.426309    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:41:06.426376    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:41:06.437560    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:41:06.437648    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:41:06.451421    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:41:06.451489    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:41:06.461903    8876 logs.go:276] 0 containers: []
	W0816 05:41:06.461921    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:41:06.461978    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:41:06.472511    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:41:06.472525    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:41:06.472530    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:41:06.486523    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:41:06.486536    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:41:06.497693    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:41:06.497705    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:41:06.522118    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:41:06.522129    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:41:06.533036    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:41:06.533050    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:41:06.546969    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:41:06.546981    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:41:06.558470    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:41:06.558483    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:41:06.593136    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:41:06.593149    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:41:06.607195    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:41:06.607208    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:41:06.618417    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:41:06.618431    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:41:06.637869    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:41:06.637880    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:41:06.658104    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:41:06.658115    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:41:06.694030    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:41:06.694039    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:41:09.199609    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:41:14.202402    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:41:14.202903    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:41:14.243014    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:41:14.243130    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:41:14.263102    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:41:14.263194    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:41:14.279022    8876 logs.go:276] 2 containers: [9bdf9243ce65 52901cfd4f78]
	I0816 05:41:14.279095    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:41:14.291923    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:41:14.291981    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:41:14.303120    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:41:14.303180    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:41:14.313362    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:41:14.313427    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:41:14.326315    8876 logs.go:276] 0 containers: []
	W0816 05:41:14.326326    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:41:14.326387    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:41:14.337211    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:41:14.337226    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:41:14.337232    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:41:14.341460    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:41:14.341466    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:41:14.355223    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:41:14.355234    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:41:14.366582    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:41:14.366592    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:41:14.383997    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:41:14.384007    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:41:14.395432    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:41:14.395446    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:41:14.420434    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:41:14.420446    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:41:14.477465    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:41:14.477487    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:41:14.537723    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:41:14.537736    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:41:14.565940    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:41:14.565952    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:41:14.577518    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:41:14.577529    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:41:14.596555    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:41:14.596566    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:41:14.611076    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:41:14.611089    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:41:17.137775    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:41:22.140529    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:41:22.140939    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:41:22.179503    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:41:22.179625    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:41:22.200738    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:41:22.200843    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:41:22.216263    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:41:22.216343    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:41:22.229314    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:41:22.229390    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:41:22.239723    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:41:22.239786    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:41:22.250311    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:41:22.250374    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:41:22.260403    8876 logs.go:276] 0 containers: []
	W0816 05:41:22.260414    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:41:22.260467    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:41:22.271211    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:41:22.271229    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:41:22.271235    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:41:22.308371    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:41:22.308381    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:41:22.319636    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:41:22.319647    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:41:22.331624    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:41:22.331637    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:41:22.347458    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:41:22.347470    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:41:22.383134    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:41:22.383146    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:41:22.397649    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:41:22.397658    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:41:22.411528    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:41:22.411539    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:41:22.427903    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:41:22.427913    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:41:22.439887    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:41:22.439899    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:41:22.444470    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:41:22.444479    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:41:22.462783    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:41:22.462796    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:41:22.480142    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:41:22.480155    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:41:22.504092    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:41:22.504100    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:41:22.516103    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:41:22.516113    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:41:25.030178    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:41:30.032903    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:41:30.033330    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:41:30.076337    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:41:30.076475    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:41:30.097164    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:41:30.097276    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:41:30.112878    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:41:30.112957    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:41:30.124311    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:41:30.124383    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:41:30.138624    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:41:30.138689    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:41:30.149341    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:41:30.149412    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:41:30.165374    8876 logs.go:276] 0 containers: []
	W0816 05:41:30.165389    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:41:30.165446    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:41:30.180262    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:41:30.180280    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:41:30.180285    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:41:30.184600    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:41:30.184609    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:41:30.196123    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:41:30.196133    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:41:30.208002    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:41:30.208011    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:41:30.225452    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:41:30.225463    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:41:30.250226    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:41:30.250235    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:41:30.284361    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:41:30.284372    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:41:30.305027    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:41:30.305037    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:41:30.318550    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:41:30.318564    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:41:30.329965    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:41:30.329973    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:41:30.368107    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:41:30.368118    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:41:30.379445    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:41:30.379457    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:41:30.390944    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:41:30.390956    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:41:30.402806    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:41:30.402815    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:41:30.424834    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:41:30.424844    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:41:32.940120    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:41:37.941206    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:41:37.941683    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:41:37.980918    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:41:37.981053    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:41:38.003497    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:41:38.003613    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:41:38.019380    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:41:38.019459    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:41:38.031820    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:41:38.031891    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:41:38.049734    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:41:38.049802    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:41:38.062702    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:41:38.062774    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:41:38.073509    8876 logs.go:276] 0 containers: []
	W0816 05:41:38.073520    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:41:38.073578    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:41:38.084417    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:41:38.084434    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:41:38.084440    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:41:38.118873    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:41:38.118885    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:41:38.135063    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:41:38.135072    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:41:38.146612    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:41:38.146625    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:41:38.151357    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:41:38.151367    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:41:38.168678    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:41:38.168688    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:41:38.179946    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:41:38.179957    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:41:38.204639    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:41:38.204648    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:41:38.221327    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:41:38.221341    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:41:38.259324    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:41:38.259333    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:41:38.279271    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:41:38.279281    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:41:38.296162    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:41:38.296176    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:41:38.310190    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:41:38.310203    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:41:38.325030    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:41:38.325042    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:41:38.340448    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:41:38.340460    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:41:40.859105    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:41:45.861335    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:41:45.861398    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:41:45.872574    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:41:45.872631    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:41:45.883829    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:41:45.883896    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:41:45.894928    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:41:45.894993    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:41:45.905966    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:41:45.906028    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:41:45.916971    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:41:45.917036    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:41:45.927819    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:41:45.927879    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:41:45.941634    8876 logs.go:276] 0 containers: []
	W0816 05:41:45.941647    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:41:45.941704    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:41:45.953705    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:41:45.953720    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:41:45.953725    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:41:45.972327    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:41:45.972341    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:41:45.988095    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:41:45.988107    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:41:45.999289    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:41:45.999300    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:41:46.003751    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:41:46.003760    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:41:46.022449    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:41:46.022459    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:41:46.047364    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:41:46.047374    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:41:46.065205    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:41:46.065218    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:41:46.079646    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:41:46.079657    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:41:46.090977    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:41:46.090989    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:41:46.102120    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:41:46.102134    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:41:46.113257    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:41:46.113269    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:41:46.151413    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:41:46.151424    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:41:46.188906    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:41:46.188918    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:41:46.201625    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:41:46.201636    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:41:48.718469    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:41:53.721062    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:41:53.721316    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:41:53.748762    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:41:53.748882    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:41:53.771840    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:41:53.771907    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:41:53.787018    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:41:53.787094    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:41:53.798212    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:41:53.798277    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:41:53.808972    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:41:53.809043    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:41:53.819326    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:41:53.819385    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:41:53.829167    8876 logs.go:276] 0 containers: []
	W0816 05:41:53.829178    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:41:53.829241    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:41:53.839542    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:41:53.839556    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:41:53.839561    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:41:53.857119    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:41:53.857126    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:41:53.871824    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:41:53.871832    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:41:53.896874    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:41:53.896882    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:41:53.908809    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:41:53.908820    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:41:53.943045    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:41:53.943056    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:41:53.959465    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:41:53.959479    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:41:53.971402    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:41:53.971412    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:41:53.975753    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:41:53.975764    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:41:53.990230    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:41:53.990240    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:41:54.004169    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:41:54.004179    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:41:54.015814    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:41:54.015824    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:41:54.028741    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:41:54.028751    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:41:54.040467    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:41:54.040480    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:41:54.058923    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:41:54.058935    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:41:56.599526    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:42:01.601640    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:42:01.601788    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:42:01.616403    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:42:01.616484    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:42:01.628770    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:42:01.628839    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:42:01.639588    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:42:01.639656    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:42:01.649973    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:42:01.650040    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:42:01.660342    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:42:01.660404    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:42:01.670800    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:42:01.670868    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:42:01.680960    8876 logs.go:276] 0 containers: []
	W0816 05:42:01.680972    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:42:01.681030    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:42:01.691176    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:42:01.691194    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:42:01.691199    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:42:01.695582    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:42:01.695591    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:42:01.706870    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:42:01.706882    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:42:01.718293    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:42:01.718303    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:42:01.743500    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:42:01.743511    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:42:01.755357    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:42:01.755374    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:42:01.794071    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:42:01.794082    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:42:01.808051    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:42:01.808060    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:42:01.821959    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:42:01.821969    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:42:01.833459    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:42:01.833468    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:42:01.848079    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:42:01.848088    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:42:01.861228    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:42:01.861243    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:42:01.875843    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:42:01.875853    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:42:01.917608    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:42:01.917617    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:42:01.929002    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:42:01.929010    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:42:04.452850    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:42:09.453778    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:42:09.453868    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:42:09.468599    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:42:09.468655    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:42:09.484153    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:42:09.484206    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:42:09.496177    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:42:09.496248    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:42:09.508154    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:42:09.508219    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:42:09.520547    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:42:09.520598    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:42:09.530980    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:42:09.531035    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:42:09.544129    8876 logs.go:276] 0 containers: []
	W0816 05:42:09.544142    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:42:09.544185    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:42:09.556375    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:42:09.556394    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:42:09.556399    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:42:09.575908    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:42:09.575917    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:42:09.588543    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:42:09.588554    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:42:09.605037    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:42:09.605048    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:42:09.623547    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:42:09.623558    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:42:09.628304    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:42:09.628314    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:42:09.664500    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:42:09.664508    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:42:09.679092    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:42:09.679105    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:42:09.692693    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:42:09.692708    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:42:09.704669    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:42:09.704677    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:42:09.729047    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:42:09.729059    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:42:09.742646    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:42:09.742657    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:42:09.779806    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:42:09.779819    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:42:09.792315    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:42:09.792324    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:42:09.808429    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:42:09.808443    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:42:12.324905    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:42:17.327675    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:42:17.327875    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:42:17.349871    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:42:17.349977    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:42:17.364752    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:42:17.364830    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:42:17.377464    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:42:17.377533    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:42:17.389602    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:42:17.389674    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:42:17.400305    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:42:17.400369    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:42:17.411172    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:42:17.411232    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:42:17.420987    8876 logs.go:276] 0 containers: []
	W0816 05:42:17.420998    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:42:17.421050    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:42:17.431857    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:42:17.431875    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:42:17.431881    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:42:17.445364    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:42:17.445375    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:42:17.470730    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:42:17.470741    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:42:17.484950    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:42:17.484962    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:42:17.496342    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:42:17.496353    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:42:17.507719    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:42:17.507730    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:42:17.511690    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:42:17.511700    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:42:17.523827    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:42:17.523840    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:42:17.538789    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:42:17.538802    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:42:17.562210    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:42:17.562221    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:42:17.600198    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:42:17.600219    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:42:17.641513    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:42:17.641529    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:42:17.654901    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:42:17.654913    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:42:17.668013    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:42:17.668026    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:42:17.681972    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:42:17.681985    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:42:20.197349    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:42:25.200107    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:42:25.200693    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:42:25.255258    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:42:25.255379    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:42:25.272393    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:42:25.272477    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:42:25.288894    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:42:25.288967    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:42:25.300067    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:42:25.300141    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:42:25.310852    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:42:25.310919    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:42:25.321145    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:42:25.321214    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:42:25.335650    8876 logs.go:276] 0 containers: []
	W0816 05:42:25.335661    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:42:25.335722    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:42:25.349251    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:42:25.349266    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:42:25.349271    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:42:25.384771    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:42:25.384785    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:42:25.408723    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:42:25.408731    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:42:25.421590    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:42:25.421602    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:42:25.433641    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:42:25.433652    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:42:25.445985    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:42:25.445995    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:42:25.463708    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:42:25.463716    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:42:25.474781    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:42:25.474794    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:42:25.489553    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:42:25.489566    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:42:25.501604    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:42:25.501616    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:42:25.539892    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:42:25.539902    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:42:25.553878    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:42:25.553887    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:42:25.565616    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:42:25.565627    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:42:25.577059    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:42:25.577070    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:42:25.581181    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:42:25.581188    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:42:28.097208    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:42:33.099762    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:42:33.099860    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:42:33.111163    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:42:33.111236    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:42:33.123575    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:42:33.123639    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:42:33.135344    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:42:33.135398    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:42:33.146165    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:42:33.146221    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:42:33.157847    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:42:33.157898    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:42:33.169957    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:42:33.170036    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:42:33.181197    8876 logs.go:276] 0 containers: []
	W0816 05:42:33.181206    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:42:33.181246    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:42:33.192020    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:42:33.192037    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:42:33.192043    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:42:33.205344    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:42:33.205354    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:42:33.220283    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:42:33.220296    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:42:33.259890    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:42:33.259907    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:42:33.274984    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:42:33.274998    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:42:33.291125    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:42:33.291137    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:42:33.304463    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:42:33.304473    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:42:33.315954    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:42:33.315964    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:42:33.320472    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:42:33.320483    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:42:33.335379    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:42:33.335390    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:42:33.348616    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:42:33.348628    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:42:33.367046    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:42:33.367059    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:42:33.410174    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:42:33.410184    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:42:33.424548    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:42:33.424560    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:42:33.440137    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:42:33.440146    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:42:35.966592    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:42:40.968314    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:42:40.968766    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:42:41.009468    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:42:41.009598    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:42:41.032705    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:42:41.032823    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:42:41.049009    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:42:41.049094    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:42:41.063460    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:42:41.063531    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:42:41.078339    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:42:41.078414    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:42:41.094574    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:42:41.094647    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:42:41.104431    8876 logs.go:276] 0 containers: []
	W0816 05:42:41.104443    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:42:41.104495    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:42:41.115004    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:42:41.115021    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:42:41.115029    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:42:41.148674    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:42:41.148687    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:42:41.163563    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:42:41.163577    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:42:41.176091    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:42:41.176106    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:42:41.188060    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:42:41.188070    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:42:41.211787    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:42:41.211796    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:42:41.228946    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:42:41.228959    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:42:41.245939    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:42:41.245952    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:42:41.277578    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:42:41.277590    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:42:41.315326    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:42:41.315336    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:42:41.319435    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:42:41.319445    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:42:41.334526    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:42:41.334539    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:42:41.346095    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:42:41.346104    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:42:41.359872    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:42:41.359883    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:42:41.371248    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:42:41.371258    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:42:43.882562    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:42:48.884776    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:42:48.885258    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:42:48.923649    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:42:48.923776    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:42:48.945076    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:42:48.945190    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:42:48.961460    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:42:48.961538    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:42:48.973923    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:42:48.973993    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:42:48.984497    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:42:48.984781    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:42:48.997074    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:42:48.997148    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:42:49.007438    8876 logs.go:276] 0 containers: []
	W0816 05:42:49.007448    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:42:49.007505    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:42:49.018043    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:42:49.018059    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:42:49.018064    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:42:49.040720    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:42:49.040727    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:42:49.078804    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:42:49.078814    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:42:49.100744    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:42:49.100756    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:42:49.111863    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:42:49.111879    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:42:49.129998    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:42:49.130008    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:42:49.164444    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:42:49.164454    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:42:49.178664    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:42:49.178675    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:42:49.190326    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:42:49.190336    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:42:49.202068    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:42:49.202081    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:42:49.206207    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:42:49.206216    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:42:49.219120    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:42:49.219133    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:42:49.232999    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:42:49.233011    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:42:49.244544    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:42:49.244559    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:42:49.259355    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:42:49.259369    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:42:51.773481    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:42:56.774148    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:42:56.774215    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 05:42:56.785663    8876 logs.go:276] 1 containers: [4e872eb61aa8]
	I0816 05:42:56.785731    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 05:42:56.797670    8876 logs.go:276] 1 containers: [74a999a2e7b5]
	I0816 05:42:56.797749    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 05:42:56.809573    8876 logs.go:276] 4 containers: [e10f383c6ee0 6e7862f33451 9bdf9243ce65 52901cfd4f78]
	I0816 05:42:56.809627    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 05:42:56.819820    8876 logs.go:276] 1 containers: [2ef808c94481]
	I0816 05:42:56.819879    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 05:42:56.830829    8876 logs.go:276] 1 containers: [007984f200be]
	I0816 05:42:56.830894    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 05:42:56.842751    8876 logs.go:276] 1 containers: [7ef75565b26f]
	I0816 05:42:56.842835    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 05:42:56.854445    8876 logs.go:276] 0 containers: []
	W0816 05:42:56.854455    8876 logs.go:278] No container was found matching "kindnet"
	I0816 05:42:56.854501    8876 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 05:42:56.865967    8876 logs.go:276] 1 containers: [57d409f82a63]
	I0816 05:42:56.865983    8876 logs.go:123] Gathering logs for coredns [9bdf9243ce65] ...
	I0816 05:42:56.865990    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bdf9243ce65"
	I0816 05:42:56.879143    8876 logs.go:123] Gathering logs for Docker ...
	I0816 05:42:56.879153    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 05:42:56.904927    8876 logs.go:123] Gathering logs for container status ...
	I0816 05:42:56.904939    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 05:42:56.916719    8876 logs.go:123] Gathering logs for dmesg ...
	I0816 05:42:56.916731    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 05:42:56.921262    8876 logs.go:123] Gathering logs for describe nodes ...
	I0816 05:42:56.921272    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 05:42:56.957985    8876 logs.go:123] Gathering logs for coredns [52901cfd4f78] ...
	I0816 05:42:56.957997    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52901cfd4f78"
	I0816 05:42:56.971607    8876 logs.go:123] Gathering logs for kube-scheduler [2ef808c94481] ...
	I0816 05:42:56.971618    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef808c94481"
	I0816 05:42:56.988214    8876 logs.go:123] Gathering logs for kube-proxy [007984f200be] ...
	I0816 05:42:56.988222    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007984f200be"
	I0816 05:42:57.004209    8876 logs.go:123] Gathering logs for kubelet ...
	I0816 05:42:57.004224    8876 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 05:42:57.044122    8876 logs.go:123] Gathering logs for coredns [e10f383c6ee0] ...
	I0816 05:42:57.044143    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e10f383c6ee0"
	I0816 05:42:57.058500    8876 logs.go:123] Gathering logs for kube-controller-manager [7ef75565b26f] ...
	I0816 05:42:57.058514    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ef75565b26f"
	I0816 05:42:57.077359    8876 logs.go:123] Gathering logs for storage-provisioner [57d409f82a63] ...
	I0816 05:42:57.077379    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d409f82a63"
	I0816 05:42:57.090228    8876 logs.go:123] Gathering logs for kube-apiserver [4e872eb61aa8] ...
	I0816 05:42:57.090239    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e872eb61aa8"
	I0816 05:42:57.105886    8876 logs.go:123] Gathering logs for coredns [6e7862f33451] ...
	I0816 05:42:57.105897    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e7862f33451"
	I0816 05:42:57.118488    8876 logs.go:123] Gathering logs for etcd [74a999a2e7b5] ...
	I0816 05:42:57.118498    8876 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a999a2e7b5"
	I0816 05:42:59.636499    8876 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 05:43:04.638834    8876 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 05:43:04.643425    8876 out.go:201] 
	W0816 05:43:04.647351    8876 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0816 05:43:04.647361    8876 out.go:270] * 
	W0816 05:43:04.648058    8876 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:43:04.662319    8876 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-972000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.58s)
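The failure above is a timeout rather than a crash: the log shows minikube repeatedly probing https://10.0.2.15:8443/healthz (api_server.go:253), each probe dying with "context deadline exceeded" (api_server.go:269), until the 6m0s budget expires and the run exits with GUEST_START. A minimal Go sketch of that poll loop, reconstructed only from the log lines above (the URL, the roughly 5-second per-probe timeout, and the 6-minute budget are read off the timestamps; this is not minikube's actual source):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthy polls a /healthz endpoint until it returns 200 OK or the
	// overall budget expires, mirroring the "Checking apiserver healthz ..." /
	// "stopped: ... context deadline exceeded" pairs in the log above.
	func waitForHealthy(url string, budget time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // each probe in the log gives up after ~5s
			Transport: &http.Transport{
				// The apiserver cert inside the VM is signed by minikubeCA,
				// so a bare probe must skip verification to reach /healthz.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2 * time.Second) // back off between probes
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}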

TestPause/serial/Start (9.92s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-864000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-864000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.856291584s)

-- stdout --
	* [pause-864000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-864000" primary control-plane node in "pause-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-864000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-864000 -n pause-864000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-864000 -n pause-864000: exit status 7 (66.897791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-864000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.92s)
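Unlike the upgrade test above, this failure and the remaining ones below never reach Kubernetes at all: qemu is launched through socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet, and that connect is refused, so no VM is ever created. A quick probe for that precondition, as an illustrative sketch (the socket path is taken from the log; nothing else is assumed):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client uses; "connection
		// refused" here reproduces the ERROR lines in the test output.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}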

TestNoKubernetes/serial/StartWithK8s (10.03s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-763000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-763000 --driver=qemu2 : exit status 80 (9.999390458s)

-- stdout --
	* [NoKubernetes-763000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-763000" primary control-plane node in "NoKubernetes-763000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-763000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-763000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-763000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-763000 -n NoKubernetes-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-763000 -n NoKubernetes-763000: exit status 7 (34.681709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.03s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-763000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-763000 --no-kubernetes --driver=qemu2 : exit status 80 (5.250371958s)

-- stdout --
	* [NoKubernetes-763000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-763000
	* Restarting existing qemu2 VM for "NoKubernetes-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-763000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-763000 -n NoKubernetes-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-763000 -n NoKubernetes-763000: exit status 7 (59.826584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-763000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-763000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239009167s)

-- stdout --
	* [NoKubernetes-763000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-763000
	* Restarting existing qemu2 VM for "NoKubernetes-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-763000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-763000 -n NoKubernetes-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-763000 -n NoKubernetes-763000: exit status 7 (48.280667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-763000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-763000 --driver=qemu2 : exit status 80 (5.270594583s)

-- stdout --
	* [NoKubernetes-763000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-763000
	* Restarting existing qemu2 VM for "NoKubernetes-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-763000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-763000 -n NoKubernetes-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-763000 -n NoKubernetes-763000: exit status 7 (38.445417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

TestNetworkPlugins/group/auto/Start (9.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.804213875s)

-- stdout --
	* [auto-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-998000" primary control-plane node in "auto-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:41:21.397062    9052 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:41:21.397208    9052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:41:21.397211    9052 out.go:358] Setting ErrFile to fd 2...
	I0816 05:41:21.397213    9052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:41:21.397340    9052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:41:21.398431    9052 out.go:352] Setting JSON to false
	I0816 05:41:21.415209    9052 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6050,"bootTime":1723806031,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:41:21.415274    9052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:41:21.421268    9052 out.go:177] * [auto-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:41:21.429235    9052 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:41:21.429303    9052 notify.go:220] Checking for updates...
	I0816 05:41:21.437139    9052 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:41:21.440184    9052 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:41:21.444170    9052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:41:21.447208    9052 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:41:21.450269    9052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:41:21.453570    9052 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:41:21.453632    9052 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:41:21.453687    9052 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:41:21.458184    9052 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:41:21.465155    9052 start.go:297] selected driver: qemu2
	I0816 05:41:21.465162    9052 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:41:21.465167    9052 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:41:21.467522    9052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:41:21.472151    9052 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:41:21.475225    9052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:41:21.475257    9052 cni.go:84] Creating CNI manager for ""
	I0816 05:41:21.475264    9052 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:41:21.475271    9052 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:41:21.475298    9052 start.go:340] cluster config:
	{Name:auto-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:41:21.479229    9052 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:41:21.487162    9052 out.go:177] * Starting "auto-998000" primary control-plane node in "auto-998000" cluster
	I0816 05:41:21.491164    9052 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:41:21.491178    9052 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:41:21.491185    9052 cache.go:56] Caching tarball of preloaded images
	I0816 05:41:21.491240    9052 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:41:21.491245    9052 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:41:21.491304    9052 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/auto-998000/config.json ...
	I0816 05:41:21.491314    9052 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/auto-998000/config.json: {Name:mkeef5df2204e82db762271880366745f5fd6785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:41:21.491631    9052 start.go:360] acquireMachinesLock for auto-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:41:21.491660    9052 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "auto-998000"
	I0816 05:41:21.491673    9052 start.go:93] Provisioning new machine with config: &{Name:auto-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:41:21.491723    9052 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:41:21.498096    9052 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:41:21.513271    9052 start.go:159] libmachine.API.Create for "auto-998000" (driver="qemu2")
	I0816 05:41:21.513301    9052 client.go:168] LocalClient.Create starting
	I0816 05:41:21.513378    9052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:41:21.513410    9052 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:21.513420    9052 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:21.513457    9052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:41:21.513481    9052 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:21.513490    9052 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:21.513901    9052 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:41:21.665778    9052 main.go:141] libmachine: Creating SSH key...
	I0816 05:41:21.689299    9052 main.go:141] libmachine: Creating Disk image...
	I0816 05:41:21.689307    9052 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:41:21.689485    9052 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2
	I0816 05:41:21.698786    9052 main.go:141] libmachine: STDOUT: 
	I0816 05:41:21.698813    9052 main.go:141] libmachine: STDERR: 
	I0816 05:41:21.698872    9052 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2 +20000M
	I0816 05:41:21.706985    9052 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:41:21.707001    9052 main.go:141] libmachine: STDERR: 
	I0816 05:41:21.707019    9052 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2
	I0816 05:41:21.707025    9052 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:41:21.707036    9052 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:41:21.707058    9052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:31:73:a6:88:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2
	I0816 05:41:21.708664    9052 main.go:141] libmachine: STDOUT: 
	I0816 05:41:21.708679    9052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:41:21.708697    9052 client.go:171] duration metric: took 195.39425ms to LocalClient.Create
	I0816 05:41:23.710879    9052 start.go:128] duration metric: took 2.219160041s to createHost
	I0816 05:41:23.711003    9052 start.go:83] releasing machines lock for "auto-998000", held for 2.219356291s
	W0816 05:41:23.711085    9052 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:41:23.721229    9052 out.go:177] * Deleting "auto-998000" in qemu2 ...
	W0816 05:41:23.752327    9052 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:41:23.752356    9052 start.go:729] Will try again in 5 seconds ...
	I0816 05:41:28.754505    9052 start.go:360] acquireMachinesLock for auto-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:41:28.754980    9052 start.go:364] duration metric: took 394.542µs to acquireMachinesLock for "auto-998000"
	I0816 05:41:28.755098    9052 start.go:93] Provisioning new machine with config: &{Name:auto-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:41:28.755313    9052 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:41:28.766950    9052 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:41:28.811187    9052 start.go:159] libmachine.API.Create for "auto-998000" (driver="qemu2")
	I0816 05:41:28.811234    9052 client.go:168] LocalClient.Create starting
	I0816 05:41:28.811343    9052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:41:28.811409    9052 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:28.811424    9052 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:28.811486    9052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:41:28.811526    9052 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:28.811537    9052 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:28.812125    9052 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:41:28.970953    9052 main.go:141] libmachine: Creating SSH key...
	I0816 05:41:29.108692    9052 main.go:141] libmachine: Creating Disk image...
	I0816 05:41:29.108700    9052 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:41:29.108901    9052 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2
	I0816 05:41:29.118496    9052 main.go:141] libmachine: STDOUT: 
	I0816 05:41:29.118517    9052 main.go:141] libmachine: STDERR: 
	I0816 05:41:29.118571    9052 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2 +20000M
	I0816 05:41:29.126671    9052 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:41:29.126689    9052 main.go:141] libmachine: STDERR: 
	I0816 05:41:29.126701    9052 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2
	I0816 05:41:29.126707    9052 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:41:29.126718    9052 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:41:29.126749    9052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:c2:ee:f5:25:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/auto-998000/disk.qcow2
	I0816 05:41:29.128469    9052 main.go:141] libmachine: STDOUT: 
	I0816 05:41:29.128492    9052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:41:29.128511    9052 client.go:171] duration metric: took 317.277834ms to LocalClient.Create
	I0816 05:41:31.130691    9052 start.go:128] duration metric: took 2.3753635s to createHost
	I0816 05:41:31.130806    9052 start.go:83] releasing machines lock for "auto-998000", held for 2.375845416s
	W0816 05:41:31.131168    9052 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:41:31.140923    9052 out.go:201] 
	W0816 05:41:31.147969    9052 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:41:31.147995    9052 out.go:270] * 
	W0816 05:41:31.150585    9052 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:41:31.159961    9052 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.81s)
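The alsologtostderr trace above also records the exact launch command: socket_vmnet_client is given the socket path plus the full qemu-system-aarch64 command line, and qemu expects the already-connected vmnet socket as file descriptor 3 (-netdev socket,id=net0,fd=3). When the unix connect is refused, the wrapper exits before qemu ever starts, which is why both create attempts fail in milliseconds. A Go sketch of that fd-passing arrangement (the real socket_vmnet_client is a C program and the qemu arguments here are trimmed; this only illustrates the mechanism):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
		}
		f, err := conn.(*net.UnixConn).File() // dup the descriptor so the child can inherit it
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child (after stdin/stdout/stderr),
		// matching the fd=3 that the -netdev flag in the log refers to.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}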

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.846591042s)

-- stdout --
	* [kindnet-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-998000" primary control-plane node in "kindnet-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:41:33.432438    9162 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:41:33.432573    9162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:41:33.432576    9162 out.go:358] Setting ErrFile to fd 2...
	I0816 05:41:33.432579    9162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:41:33.432717    9162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:41:33.433877    9162 out.go:352] Setting JSON to false
	I0816 05:41:33.450188    9162 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6062,"bootTime":1723806031,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:41:33.450250    9162 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:41:33.456217    9162 out.go:177] * [kindnet-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:41:33.463361    9162 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:41:33.463413    9162 notify.go:220] Checking for updates...
	I0816 05:41:33.472314    9162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:41:33.476241    9162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:41:33.483365    9162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:41:33.486268    9162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:41:33.489386    9162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:41:33.492712    9162 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:41:33.492781    9162 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:41:33.492829    9162 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:41:33.496324    9162 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:41:33.503386    9162 start.go:297] selected driver: qemu2
	I0816 05:41:33.503400    9162 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:41:33.503408    9162 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:41:33.505909    9162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:41:33.508306    9162 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:41:33.512413    9162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:41:33.512444    9162 cni.go:84] Creating CNI manager for "kindnet"
	I0816 05:41:33.512448    9162 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 05:41:33.512479    9162 start.go:340] cluster config:
	{Name:kindnet-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:41:33.516107    9162 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:41:33.524303    9162 out.go:177] * Starting "kindnet-998000" primary control-plane node in "kindnet-998000" cluster
	I0816 05:41:33.528344    9162 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:41:33.528364    9162 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:41:33.528370    9162 cache.go:56] Caching tarball of preloaded images
	I0816 05:41:33.528433    9162 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:41:33.528438    9162 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:41:33.528503    9162 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/kindnet-998000/config.json ...
	I0816 05:41:33.528514    9162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/kindnet-998000/config.json: {Name:mk3f03c3f64669106b95264310416acfb323050b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:41:33.528804    9162 start.go:360] acquireMachinesLock for kindnet-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:41:33.528847    9162 start.go:364] duration metric: took 35.416µs to acquireMachinesLock for "kindnet-998000"
	I0816 05:41:33.528861    9162 start.go:93] Provisioning new machine with config: &{Name:kindnet-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:41:33.528890    9162 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:41:33.537359    9162 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:41:33.552536    9162 start.go:159] libmachine.API.Create for "kindnet-998000" (driver="qemu2")
	I0816 05:41:33.552561    9162 client.go:168] LocalClient.Create starting
	I0816 05:41:33.552623    9162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:41:33.552653    9162 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:33.552662    9162 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:33.552696    9162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:41:33.552719    9162 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:33.552729    9162 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:33.553136    9162 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:41:33.702569    9162 main.go:141] libmachine: Creating SSH key...
	I0816 05:41:33.762025    9162 main.go:141] libmachine: Creating Disk image...
	I0816 05:41:33.762036    9162 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:41:33.762224    9162 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2
	I0816 05:41:33.771482    9162 main.go:141] libmachine: STDOUT: 
	I0816 05:41:33.771505    9162 main.go:141] libmachine: STDERR: 
	I0816 05:41:33.771551    9162 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2 +20000M
	I0816 05:41:33.779893    9162 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:41:33.779911    9162 main.go:141] libmachine: STDERR: 
	I0816 05:41:33.779934    9162 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2
	I0816 05:41:33.779939    9162 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:41:33.779950    9162 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:41:33.779976    9162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:63:92:aa:6b:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2
	I0816 05:41:33.781714    9162 main.go:141] libmachine: STDOUT: 
	I0816 05:41:33.781727    9162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:41:33.781751    9162 client.go:171] duration metric: took 229.189875ms to LocalClient.Create
	I0816 05:41:35.783939    9162 start.go:128] duration metric: took 2.255054375s to createHost
	I0816 05:41:35.784040    9162 start.go:83] releasing machines lock for "kindnet-998000", held for 2.255220458s
	W0816 05:41:35.784097    9162 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:41:35.797103    9162 out.go:177] * Deleting "kindnet-998000" in qemu2 ...
	W0816 05:41:35.824150    9162 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:41:35.824173    9162 start.go:729] Will try again in 5 seconds ...
	I0816 05:41:40.826256    9162 start.go:360] acquireMachinesLock for kindnet-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:41:40.826732    9162 start.go:364] duration metric: took 383.541µs to acquireMachinesLock for "kindnet-998000"
	I0816 05:41:40.826908    9162 start.go:93] Provisioning new machine with config: &{Name:kindnet-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:41:40.827177    9162 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:41:40.831789    9162 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:41:40.874945    9162 start.go:159] libmachine.API.Create for "kindnet-998000" (driver="qemu2")
	I0816 05:41:40.874992    9162 client.go:168] LocalClient.Create starting
	I0816 05:41:40.875195    9162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:41:40.875264    9162 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:40.875279    9162 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:40.875338    9162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:41:40.875376    9162 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:40.875389    9162 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:40.875906    9162 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:41:41.034638    9162 main.go:141] libmachine: Creating SSH key...
	I0816 05:41:41.192224    9162 main.go:141] libmachine: Creating Disk image...
	I0816 05:41:41.192238    9162 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:41:41.192446    9162 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2
	I0816 05:41:41.202097    9162 main.go:141] libmachine: STDOUT: 
	I0816 05:41:41.202120    9162 main.go:141] libmachine: STDERR: 
	I0816 05:41:41.202195    9162 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2 +20000M
	I0816 05:41:41.210503    9162 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:41:41.210520    9162 main.go:141] libmachine: STDERR: 
	I0816 05:41:41.210539    9162 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2
	I0816 05:41:41.210544    9162 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:41:41.210561    9162 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:41:41.210596    9162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:8b:31:90:ac:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kindnet-998000/disk.qcow2
	I0816 05:41:41.212337    9162 main.go:141] libmachine: STDOUT: 
	I0816 05:41:41.212355    9162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:41:41.212367    9162 client.go:171] duration metric: took 337.374667ms to LocalClient.Create
	I0816 05:41:43.214548    9162 start.go:128] duration metric: took 2.387349375s to createHost
	I0816 05:41:43.214668    9162 start.go:83] releasing machines lock for "kindnet-998000", held for 2.387944s
	W0816 05:41:43.215097    9162 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:41:43.225611    9162 out.go:201] 
	W0816 05:41:43.228703    9162 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:41:43.228728    9162 out.go:270] * 
	W0816 05:41:43.231444    9162 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:41:43.241617    9162 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
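
Every failure in this group has the same proximate cause: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and host creation aborts. A minimal diagnostic sketch for the affected host, assuming the default paths shown in the log above (the launchd service label is an assumption and varies by install method):

	# Does the socket file exist, and is it actually a socket?
	ls -l /var/run/socket_vmnet
	# Is any socket_vmnet daemon process running at all?
	pgrep -fl socket_vmnet
	# If installed as a launchd service (label is an assumption; adjust to your install)
	sudo launchctl list | grep -i socket_vmnet

If nothing is listening, restarting the daemon (or reinstalling it per the minikube qemu2 driver documentation) before re-running the suite is the obvious first step.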

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.80358s)

-- stdout --
	* [calico-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-998000" primary control-plane node in "calico-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:41:45.555179    9278 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:41:45.555320    9278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:41:45.555324    9278 out.go:358] Setting ErrFile to fd 2...
	I0816 05:41:45.555326    9278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:41:45.555465    9278 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:41:45.556820    9278 out.go:352] Setting JSON to false
	I0816 05:41:45.573645    9278 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6074,"bootTime":1723806031,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:41:45.573717    9278 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:41:45.580924    9278 out.go:177] * [calico-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:41:45.587940    9278 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:41:45.588024    9278 notify.go:220] Checking for updates...
	I0816 05:41:45.594914    9278 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:41:45.597962    9278 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:41:45.600932    9278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:41:45.603979    9278 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:41:45.606968    9278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:41:45.614276    9278 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:41:45.614347    9278 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:41:45.614400    9278 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:41:45.617943    9278 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:41:45.625022    9278 start.go:297] selected driver: qemu2
	I0816 05:41:45.625028    9278 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:41:45.625040    9278 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:41:45.627263    9278 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:41:45.630947    9278 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:41:45.634033    9278 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:41:45.634070    9278 cni.go:84] Creating CNI manager for "calico"
	I0816 05:41:45.634074    9278 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0816 05:41:45.634100    9278 start.go:340] cluster config:
	{Name:calico-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:41:45.637603    9278 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:41:45.644981    9278 out.go:177] * Starting "calico-998000" primary control-plane node in "calico-998000" cluster
	I0816 05:41:45.648946    9278 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:41:45.648963    9278 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:41:45.648974    9278 cache.go:56] Caching tarball of preloaded images
	I0816 05:41:45.649033    9278 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:41:45.649039    9278 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:41:45.649112    9278 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/calico-998000/config.json ...
	I0816 05:41:45.649123    9278 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/calico-998000/config.json: {Name:mk0b99f6b49847e88252d71c569d045fe2933d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:41:45.649373    9278 start.go:360] acquireMachinesLock for calico-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:41:45.649402    9278 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "calico-998000"
	I0816 05:41:45.649413    9278 start.go:93] Provisioning new machine with config: &{Name:calico-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:41:45.649452    9278 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:41:45.656988    9278 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:41:45.671950    9278 start.go:159] libmachine.API.Create for "calico-998000" (driver="qemu2")
	I0816 05:41:45.671978    9278 client.go:168] LocalClient.Create starting
	I0816 05:41:45.672044    9278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:41:45.672075    9278 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:45.672084    9278 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:45.672123    9278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:41:45.672145    9278 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:45.672152    9278 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:45.672540    9278 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:41:45.826115    9278 main.go:141] libmachine: Creating SSH key...
	I0816 05:41:45.962129    9278 main.go:141] libmachine: Creating Disk image...
	I0816 05:41:45.962140    9278 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:41:45.962371    9278 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2
	I0816 05:41:45.972830    9278 main.go:141] libmachine: STDOUT: 
	I0816 05:41:45.972856    9278 main.go:141] libmachine: STDERR: 
	I0816 05:41:45.972922    9278 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2 +20000M
	I0816 05:41:45.982165    9278 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:41:45.982186    9278 main.go:141] libmachine: STDERR: 
	I0816 05:41:45.982210    9278 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2
	I0816 05:41:45.982215    9278 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:41:45.982232    9278 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:41:45.982278    9278 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:23:ab:61:7a:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2
	I0816 05:41:45.984328    9278 main.go:141] libmachine: STDOUT: 
	I0816 05:41:45.984349    9278 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:41:45.984368    9278 client.go:171] duration metric: took 312.38925ms to LocalClient.Create
	I0816 05:41:47.986681    9278 start.go:128] duration metric: took 2.337213625s to createHost
	I0816 05:41:47.986790    9278 start.go:83] releasing machines lock for "calico-998000", held for 2.337416875s
	W0816 05:41:47.986837    9278 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:41:48.004364    9278 out.go:177] * Deleting "calico-998000" in qemu2 ...
	W0816 05:41:48.030769    9278 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:41:48.030797    9278 start.go:729] Will try again in 5 seconds ...
	I0816 05:41:53.032831    9278 start.go:360] acquireMachinesLock for calico-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:41:53.033119    9278 start.go:364] duration metric: took 234µs to acquireMachinesLock for "calico-998000"
	I0816 05:41:53.033205    9278 start.go:93] Provisioning new machine with config: &{Name:calico-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:41:53.033305    9278 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:41:53.041736    9278 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:41:53.078003    9278 start.go:159] libmachine.API.Create for "calico-998000" (driver="qemu2")
	I0816 05:41:53.078045    9278 client.go:168] LocalClient.Create starting
	I0816 05:41:53.078156    9278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:41:53.078218    9278 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:53.078232    9278 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:53.078282    9278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:41:53.078320    9278 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:53.078333    9278 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:53.078793    9278 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:41:53.243055    9278 main.go:141] libmachine: Creating SSH key...
	I0816 05:41:53.272389    9278 main.go:141] libmachine: Creating Disk image...
	I0816 05:41:53.272395    9278 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:41:53.272573    9278 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2
	I0816 05:41:53.281960    9278 main.go:141] libmachine: STDOUT: 
	I0816 05:41:53.281983    9278 main.go:141] libmachine: STDERR: 
	I0816 05:41:53.282036    9278 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2 +20000M
	I0816 05:41:53.290153    9278 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:41:53.290177    9278 main.go:141] libmachine: STDERR: 
	I0816 05:41:53.290194    9278 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2
	I0816 05:41:53.290199    9278 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:41:53.290210    9278 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:41:53.290237    9278 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:02:d9:12:fb:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/calico-998000/disk.qcow2
	I0816 05:41:53.291895    9278 main.go:141] libmachine: STDOUT: 
	I0816 05:41:53.291916    9278 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:41:53.291930    9278 client.go:171] duration metric: took 213.884417ms to LocalClient.Create
	I0816 05:41:55.294021    9278 start.go:128] duration metric: took 2.260735834s to createHost
	I0816 05:41:55.294083    9278 start.go:83] releasing machines lock for "calico-998000", held for 2.260988583s
	W0816 05:41:55.294341    9278 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:41:55.303808    9278 out.go:201] 
	W0816 05:41:55.308729    9278 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:41:55.308746    9278 out.go:270] * 
	W0816 05:41:55.309919    9278 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:41:55.320722    9278 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
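
The "Connection refused" text is the errno from connect(2) on the Unix-domain socket, which can be reproduced without minikube at all. A small probe sketch, assuming the same socket path as in the log (macOS nc supports -U for Unix-domain sockets):

	# A refused connection here confirms nothing is listening,
	# independent of minikube and QEMU
	nc -U /var/run/socket_vmnet < /dev/null && echo connected || echo refused

Note that socket_vmnet normally runs as root; even with a listener present, a connect can still fail if the socket's permissions do not allow the test user.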

TestNetworkPlugins/group/custom-flannel/Start (9.9s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.898057334s)

-- stdout --
	* [custom-flannel-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-998000" primary control-plane node in "custom-flannel-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:41:57.743023    9395 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:41:57.743156    9395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:41:57.743160    9395 out.go:358] Setting ErrFile to fd 2...
	I0816 05:41:57.743162    9395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:41:57.743279    9395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:41:57.744321    9395 out.go:352] Setting JSON to false
	I0816 05:41:57.760598    9395 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6086,"bootTime":1723806031,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:41:57.760655    9395 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:41:57.766473    9395 out.go:177] * [custom-flannel-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:41:57.773453    9395 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:41:57.773539    9395 notify.go:220] Checking for updates...
	I0816 05:41:57.780391    9395 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:41:57.783471    9395 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:41:57.787424    9395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:41:57.790401    9395 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:41:57.793442    9395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:41:57.796709    9395 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:41:57.796776    9395 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:41:57.796824    9395 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:41:57.800390    9395 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:41:57.807435    9395 start.go:297] selected driver: qemu2
	I0816 05:41:57.807444    9395 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:41:57.807451    9395 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:41:57.809585    9395 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:41:57.813421    9395 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:41:57.816436    9395 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:41:57.816453    9395 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0816 05:41:57.816463    9395 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0816 05:41:57.816490    9395 start.go:340] cluster config:
	{Name:custom-flannel-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:41:57.819993    9395 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:41:57.827380    9395 out.go:177] * Starting "custom-flannel-998000" primary control-plane node in "custom-flannel-998000" cluster
	I0816 05:41:57.831444    9395 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:41:57.831459    9395 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:41:57.831469    9395 cache.go:56] Caching tarball of preloaded images
	I0816 05:41:57.831528    9395 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:41:57.831535    9395 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:41:57.831617    9395 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/custom-flannel-998000/config.json ...
	I0816 05:41:57.831633    9395 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/custom-flannel-998000/config.json: {Name:mk85b024fe005078a321770fb8703b5dce72e695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:41:57.831993    9395 start.go:360] acquireMachinesLock for custom-flannel-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:41:57.832037    9395 start.go:364] duration metric: took 32.416µs to acquireMachinesLock for "custom-flannel-998000"
	I0816 05:41:57.832052    9395 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:41:57.832083    9395 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:41:57.839452    9395 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:41:57.856317    9395 start.go:159] libmachine.API.Create for "custom-flannel-998000" (driver="qemu2")
	I0816 05:41:57.856352    9395 client.go:168] LocalClient.Create starting
	I0816 05:41:57.856415    9395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:41:57.856445    9395 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:57.856458    9395 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:57.856504    9395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:41:57.856526    9395 main.go:141] libmachine: Decoding PEM data...
	I0816 05:41:57.856531    9395 main.go:141] libmachine: Parsing certificate...
	I0816 05:41:57.856934    9395 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:41:58.010844    9395 main.go:141] libmachine: Creating SSH key...
	I0816 05:41:58.249768    9395 main.go:141] libmachine: Creating Disk image...
	I0816 05:41:58.249783    9395 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:41:58.250024    9395 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2
	I0816 05:41:58.259862    9395 main.go:141] libmachine: STDOUT: 
	I0816 05:41:58.259883    9395 main.go:141] libmachine: STDERR: 
	I0816 05:41:58.259952    9395 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2 +20000M
	I0816 05:41:58.268215    9395 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:41:58.268229    9395 main.go:141] libmachine: STDERR: 
	I0816 05:41:58.268245    9395 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2
	I0816 05:41:58.268249    9395 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:41:58.268263    9395 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:41:58.268296    9395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:ab:3c:34:4a:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2
	I0816 05:41:58.269960    9395 main.go:141] libmachine: STDOUT: 
	I0816 05:41:58.269976    9395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:41:58.269995    9395 client.go:171] duration metric: took 413.641083ms to LocalClient.Create
	I0816 05:42:00.272199    9395 start.go:128] duration metric: took 2.440125208s to createHost
	I0816 05:42:00.272272    9395 start.go:83] releasing machines lock for "custom-flannel-998000", held for 2.440265125s
	W0816 05:42:00.272346    9395 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:00.278053    9395 out.go:177] * Deleting "custom-flannel-998000" in qemu2 ...
	W0816 05:42:00.305608    9395 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:00.305632    9395 start.go:729] Will try again in 5 seconds ...
	I0816 05:42:05.307592    9395 start.go:360] acquireMachinesLock for custom-flannel-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:42:05.307949    9395 start.go:364] duration metric: took 287.416µs to acquireMachinesLock for "custom-flannel-998000"
	I0816 05:42:05.307986    9395 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:42:05.308145    9395 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:42:05.315476    9395 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:42:05.351891    9395 start.go:159] libmachine.API.Create for "custom-flannel-998000" (driver="qemu2")
	I0816 05:42:05.351939    9395 client.go:168] LocalClient.Create starting
	I0816 05:42:05.352056    9395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:42:05.352116    9395 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:05.352131    9395 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:05.352207    9395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:42:05.352252    9395 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:05.352263    9395 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:05.352896    9395 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:42:05.510917    9395 main.go:141] libmachine: Creating SSH key...
	I0816 05:42:05.546053    9395 main.go:141] libmachine: Creating Disk image...
	I0816 05:42:05.546064    9395 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:42:05.546237    9395 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2
	I0816 05:42:05.555601    9395 main.go:141] libmachine: STDOUT: 
	I0816 05:42:05.555618    9395 main.go:141] libmachine: STDERR: 
	I0816 05:42:05.555670    9395 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2 +20000M
	I0816 05:42:05.563802    9395 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:42:05.563817    9395 main.go:141] libmachine: STDERR: 
	I0816 05:42:05.563828    9395 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2
	I0816 05:42:05.563834    9395 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:42:05.563845    9395 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:42:05.563873    9395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:0f:c2:a0:bc:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/custom-flannel-998000/disk.qcow2
	I0816 05:42:05.565572    9395 main.go:141] libmachine: STDOUT: 
	I0816 05:42:05.565589    9395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:42:05.565602    9395 client.go:171] duration metric: took 213.660875ms to LocalClient.Create
	I0816 05:42:07.567722    9395 start.go:128] duration metric: took 2.259595708s to createHost
	I0816 05:42:07.567796    9395 start.go:83] releasing machines lock for "custom-flannel-998000", held for 2.259828292s
	W0816 05:42:07.567979    9395 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:07.584458    9395 out.go:201] 
	W0816 05:42:07.588412    9395 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:42:07.588433    9395 out.go:270] * 
	* 
	W0816 05:42:07.590175    9395 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:42:07.603349    9395 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.90s)
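
Every failure in this group reduces to the same host-side precondition: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so each attempt dies with "Connection refused" and the start exits with status 80. A minimal pre-flight check, sketched here as a hypothetical shell snippet (not part of net_test.go), assuming only stock macOS tools and the socket path taken from the logs above:

	# Hypothetical pre-flight check for the CI host; not part of the suite.
	ls -l /var/run/socket_vmnet        # the unix socket must exist
	nc -U /var/run/socket_vmnet </dev/null \
	    && echo "socket_vmnet is accepting connections" \
	    || echo "refused: the socket_vmnet daemon is not running"

If this check fails, every qemu2 start below can be expected to fail identically until the daemon is brought back up.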

TestNetworkPlugins/group/false/Start (9.87s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.865353791s)

-- stdout --
	* [false-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-998000" primary control-plane node in "false-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:42:10.021708    9512 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:42:10.021828    9512 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:42:10.021832    9512 out.go:358] Setting ErrFile to fd 2...
	I0816 05:42:10.021834    9512 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:42:10.021972    9512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:42:10.023114    9512 out.go:352] Setting JSON to false
	I0816 05:42:10.039645    9512 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6099,"bootTime":1723806031,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:42:10.039740    9512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:42:10.046354    9512 out.go:177] * [false-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:42:10.053278    9512 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:42:10.053319    9512 notify.go:220] Checking for updates...
	I0816 05:42:10.059245    9512 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:42:10.062296    9512 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:42:10.066289    9512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:42:10.069283    9512 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:42:10.072297    9512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:42:10.075682    9512 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:42:10.075745    9512 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:42:10.075802    9512 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:42:10.079311    9512 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:42:10.086259    9512 start.go:297] selected driver: qemu2
	I0816 05:42:10.086265    9512 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:42:10.086272    9512 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:42:10.088518    9512 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:42:10.091226    9512 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:42:10.095343    9512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:42:10.095368    9512 cni.go:84] Creating CNI manager for "false"
	I0816 05:42:10.095404    9512 start.go:340] cluster config:
	{Name:false-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:42:10.098930    9512 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:42:10.107251    9512 out.go:177] * Starting "false-998000" primary control-plane node in "false-998000" cluster
	I0816 05:42:10.111333    9512 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:42:10.111358    9512 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:42:10.111372    9512 cache.go:56] Caching tarball of preloaded images
	I0816 05:42:10.111439    9512 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:42:10.111445    9512 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:42:10.111539    9512 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/false-998000/config.json ...
	I0816 05:42:10.111551    9512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/false-998000/config.json: {Name:mk991d48a94e8cb344c564cf8a7ef475d98085aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:42:10.111773    9512 start.go:360] acquireMachinesLock for false-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:42:10.111805    9512 start.go:364] duration metric: took 26.125µs to acquireMachinesLock for "false-998000"
	I0816 05:42:10.111817    9512 start.go:93] Provisioning new machine with config: &{Name:false-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:42:10.111857    9512 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:42:10.120264    9512 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:42:10.136943    9512 start.go:159] libmachine.API.Create for "false-998000" (driver="qemu2")
	I0816 05:42:10.136970    9512 client.go:168] LocalClient.Create starting
	I0816 05:42:10.137052    9512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:42:10.137086    9512 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:10.137095    9512 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:10.137134    9512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:42:10.137156    9512 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:10.137164    9512 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:10.137497    9512 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:42:10.288160    9512 main.go:141] libmachine: Creating SSH key...
	I0816 05:42:10.496104    9512 main.go:141] libmachine: Creating Disk image...
	I0816 05:42:10.496113    9512 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:42:10.496319    9512 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2
	I0816 05:42:10.506191    9512 main.go:141] libmachine: STDOUT: 
	I0816 05:42:10.506221    9512 main.go:141] libmachine: STDERR: 
	I0816 05:42:10.506275    9512 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2 +20000M
	I0816 05:42:10.514559    9512 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:42:10.514575    9512 main.go:141] libmachine: STDERR: 
	I0816 05:42:10.514599    9512 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2
	I0816 05:42:10.514604    9512 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:42:10.514614    9512 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:42:10.514637    9512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:a8:aa:09:d5:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2
	I0816 05:42:10.516408    9512 main.go:141] libmachine: STDOUT: 
	I0816 05:42:10.516431    9512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:42:10.516454    9512 client.go:171] duration metric: took 379.484791ms to LocalClient.Create
	I0816 05:42:12.518618    9512 start.go:128] duration metric: took 2.406778625s to createHost
	I0816 05:42:12.518679    9512 start.go:83] releasing machines lock for "false-998000", held for 2.406906709s
	W0816 05:42:12.518730    9512 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:12.534685    9512 out.go:177] * Deleting "false-998000" in qemu2 ...
	W0816 05:42:12.559257    9512 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:12.559282    9512 start.go:729] Will try again in 5 seconds ...
	I0816 05:42:17.559372    9512 start.go:360] acquireMachinesLock for false-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:42:17.559492    9512 start.go:364] duration metric: took 95.833µs to acquireMachinesLock for "false-998000"
	I0816 05:42:17.559505    9512 start.go:93] Provisioning new machine with config: &{Name:false-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:42:17.559545    9512 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:42:17.566695    9512 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:42:17.583011    9512 start.go:159] libmachine.API.Create for "false-998000" (driver="qemu2")
	I0816 05:42:17.583037    9512 client.go:168] LocalClient.Create starting
	I0816 05:42:17.583096    9512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:42:17.583134    9512 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:17.583143    9512 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:17.583180    9512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:42:17.583204    9512 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:17.583211    9512 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:17.583490    9512 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:42:17.734918    9512 main.go:141] libmachine: Creating SSH key...
	I0816 05:42:17.794185    9512 main.go:141] libmachine: Creating Disk image...
	I0816 05:42:17.794193    9512 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:42:17.794374    9512 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2
	I0816 05:42:17.804156    9512 main.go:141] libmachine: STDOUT: 
	I0816 05:42:17.804177    9512 main.go:141] libmachine: STDERR: 
	I0816 05:42:17.804223    9512 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2 +20000M
	I0816 05:42:17.812374    9512 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:42:17.812392    9512 main.go:141] libmachine: STDERR: 
	I0816 05:42:17.812406    9512 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2
	I0816 05:42:17.812410    9512 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:42:17.812416    9512 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:42:17.812472    9512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:4c:af:93:3b:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/false-998000/disk.qcow2
	I0816 05:42:17.814269    9512 main.go:141] libmachine: STDOUT: 
	I0816 05:42:17.814293    9512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:42:17.814307    9512 client.go:171] duration metric: took 231.270959ms to LocalClient.Create
	I0816 05:42:19.816481    9512 start.go:128] duration metric: took 2.256944708s to createHost
	I0816 05:42:19.816593    9512 start.go:83] releasing machines lock for "false-998000", held for 2.257120042s
	W0816 05:42:19.816978    9512 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:19.828609    9512 out.go:201] 
	W0816 05:42:19.833680    9512 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:42:19.833732    9512 out.go:270] * 
	* 
	W0816 05:42:19.836038    9512 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:42:19.845546    9512 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.87s)
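
Same root cause as the custom-flannel entry above. For completeness, a hedged sketch of restarting the daemon by hand, following the install layout the logs show (/opt/socket_vmnet) and the invocation documented in the upstream lima-vm/socket_vmnet README; the gateway address is an assumption, and vmnet.framework requires root:

	# Assumed invocation per the socket_vmnet README; adjust the gateway
	# address to match the local vmnet configuration.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the daemon is listening, socket_vmnet_client can hand QEMU the fd=3 netdev socket shown in the command lines above.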

TestNetworkPlugins/group/enable-default-cni/Start (9.77s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.770053416s)

-- stdout --
	* [enable-default-cni-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-998000" primary control-plane node in "enable-default-cni-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:42:21.982154    9621 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:42:21.982317    9621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:42:21.982320    9621 out.go:358] Setting ErrFile to fd 2...
	I0816 05:42:21.982323    9621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:42:21.982442    9621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:42:21.983555    9621 out.go:352] Setting JSON to false
	I0816 05:42:22.000217    9621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6110,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:42:22.000302    9621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:42:22.007599    9621 out.go:177] * [enable-default-cni-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:42:22.014565    9621 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:42:22.014661    9621 notify.go:220] Checking for updates...
	I0816 05:42:22.021586    9621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:42:22.024605    9621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:42:22.027577    9621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:42:22.030660    9621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:42:22.033620    9621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:42:22.036965    9621 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:42:22.037030    9621 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:42:22.037096    9621 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:42:22.041667    9621 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:42:22.048550    9621 start.go:297] selected driver: qemu2
	I0816 05:42:22.048556    9621 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:42:22.048562    9621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:42:22.050942    9621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:42:22.054550    9621 out.go:177] * Automatically selected the socket_vmnet network
	E0816 05:42:22.055712    9621 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0816 05:42:22.055726    9621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:42:22.055745    9621 cni.go:84] Creating CNI manager for "bridge"
	I0816 05:42:22.055749    9621 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:42:22.055775    9621 start.go:340] cluster config:
	{Name:enable-default-cni-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:42:22.059472    9621 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:42:22.066596    9621 out.go:177] * Starting "enable-default-cni-998000" primary control-plane node in "enable-default-cni-998000" cluster
	I0816 05:42:22.070586    9621 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:42:22.070604    9621 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:42:22.070613    9621 cache.go:56] Caching tarball of preloaded images
	I0816 05:42:22.070676    9621 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:42:22.070688    9621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:42:22.070756    9621 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/enable-default-cni-998000/config.json ...
	I0816 05:42:22.070774    9621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/enable-default-cni-998000/config.json: {Name:mk3ddc3aa41ca78460718069d38ceb822e569710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:42:22.071121    9621 start.go:360] acquireMachinesLock for enable-default-cni-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:42:22.071152    9621 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "enable-default-cni-998000"
	I0816 05:42:22.071163    9621 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:42:22.071192    9621 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:42:22.078539    9621 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:42:22.093396    9621 start.go:159] libmachine.API.Create for "enable-default-cni-998000" (driver="qemu2")
	I0816 05:42:22.093420    9621 client.go:168] LocalClient.Create starting
	I0816 05:42:22.093481    9621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:42:22.093511    9621 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:22.093519    9621 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:22.093554    9621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:42:22.093584    9621 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:22.093589    9621 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:22.094044    9621 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:42:22.246869    9621 main.go:141] libmachine: Creating SSH key...
	I0816 05:42:22.318124    9621 main.go:141] libmachine: Creating Disk image...
	I0816 05:42:22.318129    9621 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:42:22.318307    9621 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2
	I0816 05:42:22.327697    9621 main.go:141] libmachine: STDOUT: 
	I0816 05:42:22.327714    9621 main.go:141] libmachine: STDERR: 
	I0816 05:42:22.327760    9621 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2 +20000M
	I0816 05:42:22.335620    9621 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:42:22.335637    9621 main.go:141] libmachine: STDERR: 
	I0816 05:42:22.335660    9621 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2
	I0816 05:42:22.335664    9621 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:42:22.335676    9621 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:42:22.335703    9621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:96:e5:62:dc:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2
	I0816 05:42:22.337373    9621 main.go:141] libmachine: STDOUT: 
	I0816 05:42:22.337389    9621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:42:22.337410    9621 client.go:171] duration metric: took 243.989542ms to LocalClient.Create
	I0816 05:42:24.339712    9621 start.go:128] duration metric: took 2.268493167s to createHost
	I0816 05:42:24.339796    9621 start.go:83] releasing machines lock for "enable-default-cni-998000", held for 2.268672125s
	W0816 05:42:24.339861    9621 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:24.352109    9621 out.go:177] * Deleting "enable-default-cni-998000" in qemu2 ...
	W0816 05:42:24.381353    9621 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:24.381383    9621 start.go:729] Will try again in 5 seconds ...
	I0816 05:42:29.382285    9621 start.go:360] acquireMachinesLock for enable-default-cni-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:42:29.382549    9621 start.go:364] duration metric: took 225.375µs to acquireMachinesLock for "enable-default-cni-998000"
	I0816 05:42:29.382585    9621 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:42:29.382755    9621 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:42:29.389096    9621 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:42:29.423050    9621 start.go:159] libmachine.API.Create for "enable-default-cni-998000" (driver="qemu2")
	I0816 05:42:29.423096    9621 client.go:168] LocalClient.Create starting
	I0816 05:42:29.423197    9621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:42:29.423280    9621 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:29.423294    9621 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:29.423347    9621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:42:29.423385    9621 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:29.423397    9621 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:29.423869    9621 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:42:29.577660    9621 main.go:141] libmachine: Creating SSH key...
	I0816 05:42:29.668803    9621 main.go:141] libmachine: Creating Disk image...
	I0816 05:42:29.668810    9621 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:42:29.668996    9621 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2
	I0816 05:42:29.678448    9621 main.go:141] libmachine: STDOUT: 
	I0816 05:42:29.678468    9621 main.go:141] libmachine: STDERR: 
	I0816 05:42:29.678522    9621 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2 +20000M
	I0816 05:42:29.686732    9621 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:42:29.686750    9621 main.go:141] libmachine: STDERR: 
	I0816 05:42:29.686761    9621 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2
	I0816 05:42:29.686766    9621 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:42:29.686780    9621 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:42:29.686807    9621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:85:c2:c7:cf:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/enable-default-cni-998000/disk.qcow2
	I0816 05:42:29.688579    9621 main.go:141] libmachine: STDOUT: 
	I0816 05:42:29.688596    9621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:42:29.688610    9621 client.go:171] duration metric: took 265.515041ms to LocalClient.Create
	I0816 05:42:31.690725    9621 start.go:128] duration metric: took 2.307993333s to createHost
	I0816 05:42:31.690756    9621 start.go:83] releasing machines lock for "enable-default-cni-998000", held for 2.308232958s
	W0816 05:42:31.690889    9621 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:31.701102    9621 out.go:201] 
	W0816 05:42:31.704230    9621 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:42:31.704237    9621 out.go:270] * 
	* 
	W0816 05:42:31.704816    9621 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:42:31.713203    9621 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.77s)
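Note: every QEMU launch in this run dies the same way because nothing is accepting connections on /var/run/socket_vmnet. The following minimal Go sketch (an editorial illustration, not part of the captured output) reproduces the same "connection refused" by dialing the unix socket that socket_vmnet_client passes to qemu-system-aarch64; the socket path is taken verbatim from the failing command line above.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path taken verbatim from the failing socket_vmnet_client invocation.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no daemon listening, this prints something like:
		//   dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial instead fails with "no such file or directory", the socket path was never created; a "connection refused" as seen here typically means the socket file exists but no socket_vmnet daemon is listening behind it.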

TestNetworkPlugins/group/flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.918764833s)

-- stdout --
	* [flannel-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-998000" primary control-plane node in "flannel-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:42:33.854274    9730 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:42:33.854410    9730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:42:33.854413    9730 out.go:358] Setting ErrFile to fd 2...
	I0816 05:42:33.854416    9730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:42:33.854551    9730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:42:33.855650    9730 out.go:352] Setting JSON to false
	I0816 05:42:33.872011    9730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6122,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:42:33.872108    9730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:42:33.879136    9730 out.go:177] * [flannel-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:42:33.887127    9730 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:42:33.887183    9730 notify.go:220] Checking for updates...
	I0816 05:42:33.894009    9730 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:42:33.897069    9730 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:42:33.901119    9730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:42:33.904097    9730 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:42:33.907098    9730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:42:33.910523    9730 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:42:33.910589    9730 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:42:33.910641    9730 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:42:33.915033    9730 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:42:33.922174    9730 start.go:297] selected driver: qemu2
	I0816 05:42:33.922182    9730 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:42:33.922191    9730 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:42:33.924425    9730 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:42:33.928111    9730 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:42:33.931170    9730 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:42:33.931190    9730 cni.go:84] Creating CNI manager for "flannel"
	I0816 05:42:33.931194    9730 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0816 05:42:33.931228    9730 start.go:340] cluster config:
	{Name:flannel-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:42:33.934808    9730 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:42:33.941068    9730 out.go:177] * Starting "flannel-998000" primary control-plane node in "flannel-998000" cluster
	I0816 05:42:33.945099    9730 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:42:33.945113    9730 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:42:33.945124    9730 cache.go:56] Caching tarball of preloaded images
	I0816 05:42:33.945178    9730 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:42:33.945184    9730 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:42:33.945246    9730 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/flannel-998000/config.json ...
	I0816 05:42:33.945257    9730 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/flannel-998000/config.json: {Name:mkc45abac5595b6638820a4f5a623ae5504a333c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:42:33.945471    9730 start.go:360] acquireMachinesLock for flannel-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:42:33.945506    9730 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "flannel-998000"
	I0816 05:42:33.945519    9730 start.go:93] Provisioning new machine with config: &{Name:flannel-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:flannel-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:42:33.945555    9730 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:42:33.953084    9730 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:42:33.968281    9730 start.go:159] libmachine.API.Create for "flannel-998000" (driver="qemu2")
	I0816 05:42:33.968305    9730 client.go:168] LocalClient.Create starting
	I0816 05:42:33.968363    9730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:42:33.968393    9730 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:33.968402    9730 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:33.968443    9730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:42:33.968465    9730 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:33.968476    9730 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:33.968880    9730 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:42:34.120296    9730 main.go:141] libmachine: Creating SSH key...
	I0816 05:42:34.330699    9730 main.go:141] libmachine: Creating Disk image...
	I0816 05:42:34.330708    9730 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:42:34.330902    9730 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2
	I0816 05:42:34.341419    9730 main.go:141] libmachine: STDOUT: 
	I0816 05:42:34.341438    9730 main.go:141] libmachine: STDERR: 
	I0816 05:42:34.341482    9730 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2 +20000M
	I0816 05:42:34.349419    9730 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:42:34.349435    9730 main.go:141] libmachine: STDERR: 
	I0816 05:42:34.349450    9730 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2
	I0816 05:42:34.349454    9730 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:42:34.349476    9730 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:42:34.349504    9730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:18:f9:15:20:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2
	I0816 05:42:34.351104    9730 main.go:141] libmachine: STDOUT: 
	I0816 05:42:34.351117    9730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:42:34.351135    9730 client.go:171] duration metric: took 382.831792ms to LocalClient.Create
	I0816 05:42:36.353309    9730 start.go:128] duration metric: took 2.407759041s to createHost
	I0816 05:42:36.353386    9730 start.go:83] releasing machines lock for "flannel-998000", held for 2.407910292s
	W0816 05:42:36.353477    9730 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:36.360905    9730 out.go:177] * Deleting "flannel-998000" in qemu2 ...
	W0816 05:42:36.388940    9730 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:36.388971    9730 start.go:729] Will try again in 5 seconds ...
	I0816 05:42:41.391074    9730 start.go:360] acquireMachinesLock for flannel-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:42:41.391264    9730 start.go:364] duration metric: took 153.208µs to acquireMachinesLock for "flannel-998000"
	I0816 05:42:41.391291    9730 start.go:93] Provisioning new machine with config: &{Name:flannel-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:flannel-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:42:41.391334    9730 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:42:41.399536    9730 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:42:41.414959    9730 start.go:159] libmachine.API.Create for "flannel-998000" (driver="qemu2")
	I0816 05:42:41.414986    9730 client.go:168] LocalClient.Create starting
	I0816 05:42:41.415048    9730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:42:41.415088    9730 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:41.415096    9730 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:41.415129    9730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:42:41.415151    9730 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:41.415156    9730 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:41.415530    9730 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:42:41.566604    9730 main.go:141] libmachine: Creating SSH key...
	I0816 05:42:41.686129    9730 main.go:141] libmachine: Creating Disk image...
	I0816 05:42:41.686138    9730 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:42:41.686313    9730 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2
	I0816 05:42:41.695982    9730 main.go:141] libmachine: STDOUT: 
	I0816 05:42:41.695999    9730 main.go:141] libmachine: STDERR: 
	I0816 05:42:41.696054    9730 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2 +20000M
	I0816 05:42:41.704067    9730 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:42:41.704083    9730 main.go:141] libmachine: STDERR: 
	I0816 05:42:41.704093    9730 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2
	I0816 05:42:41.704098    9730 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:42:41.704109    9730 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:42:41.704138    9730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:4b:72:b7:65:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/flannel-998000/disk.qcow2
	I0816 05:42:41.705816    9730 main.go:141] libmachine: STDOUT: 
	I0816 05:42:41.705831    9730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:42:41.705843    9730 client.go:171] duration metric: took 290.859708ms to LocalClient.Create
	I0816 05:42:43.707917    9730 start.go:128] duration metric: took 2.316603083s to createHost
	I0816 05:42:43.707964    9730 start.go:83] releasing machines lock for "flannel-998000", held for 2.316725833s
	W0816 05:42:43.708119    9730 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:43.716441    9730 out.go:201] 
	W0816 05:42:43.721515    9730 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:42:43.721535    9730 out.go:270] * 
	* 
	W0816 05:42:43.722618    9730 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:42:43.735409    9730 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.92s)
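Note: the disk-provisioning steps logged above (qemu-img convert to qcow2, then qemu-img resize +20000M) succeed on every attempt; only the socket_vmnet launch fails. A minimal Go sketch of those two steps, assuming qemu-img is on PATH and a raw seed image already exists; the /tmp path is hypothetical (the real run writes under .minikube/machines/<profile>/).

package main

import (
	"fmt"
	"os/exec"
)

// run prints the command the way libmachine logs "executing:" and
// surfaces the combined output, returning any execution error.
func run(name string, args ...string) error {
	fmt.Println("executing:", name, args)
	out, err := exec.Command(name, args...).CombinedOutput()
	if len(out) > 0 {
		fmt.Print(string(out))
	}
	return err
}

func main() {
	base := "/tmp/demo-machine/disk" // hypothetical path for illustration only
	// Step 1: convert the raw seed image to qcow2, as in the log above.
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", base+".qcow2.raw", base+".qcow2"); err != nil {
		fmt.Println("convert failed:", err)
		return
	}
	// Step 2: grow the qcow2 by 20000 MB ("Image resized." in the log).
	if err := run("qemu-img", "resize", base+".qcow2", "+20000M"); err != nil {
		fmt.Println("resize failed:", err)
	}
}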

TestNetworkPlugins/group/bridge/Start (9.71s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.711381125s)

-- stdout --
	* [bridge-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-998000" primary control-plane node in "bridge-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:42:46.058702    9848 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:42:46.058842    9848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:42:46.058845    9848 out.go:358] Setting ErrFile to fd 2...
	I0816 05:42:46.058848    9848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:42:46.058978    9848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:42:46.060062    9848 out.go:352] Setting JSON to false
	I0816 05:42:46.076751    9848 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6135,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:42:46.076831    9848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:42:46.083377    9848 out.go:177] * [bridge-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:42:46.091366    9848 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:42:46.091395    9848 notify.go:220] Checking for updates...
	I0816 05:42:46.099298    9848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:42:46.102375    9848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:42:46.106327    9848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:42:46.109358    9848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:42:46.112405    9848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:42:46.115689    9848 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:42:46.115756    9848 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:42:46.115797    9848 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:42:46.119289    9848 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:42:46.125244    9848 start.go:297] selected driver: qemu2
	I0816 05:42:46.125250    9848 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:42:46.125256    9848 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:42:46.127444    9848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:42:46.131367    9848 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:42:46.134436    9848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:42:46.134456    9848 cni.go:84] Creating CNI manager for "bridge"
	I0816 05:42:46.134460    9848 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:42:46.134506    9848 start.go:340] cluster config:
	{Name:bridge-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:42:46.138090    9848 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:42:46.146338    9848 out.go:177] * Starting "bridge-998000" primary control-plane node in "bridge-998000" cluster
	I0816 05:42:46.150206    9848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:42:46.150224    9848 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:42:46.150236    9848 cache.go:56] Caching tarball of preloaded images
	I0816 05:42:46.150302    9848 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:42:46.150308    9848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:42:46.150393    9848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/bridge-998000/config.json ...
	I0816 05:42:46.150405    9848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/bridge-998000/config.json: {Name:mk8d31e29d45b9c2203b03077a53520df9cb6fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:42:46.150690    9848 start.go:360] acquireMachinesLock for bridge-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:42:46.150724    9848 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "bridge-998000"
	I0816 05:42:46.150736    9848 start.go:93] Provisioning new machine with config: &{Name:bridge-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:42:46.150769    9848 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:42:46.157324    9848 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:42:46.174670    9848 start.go:159] libmachine.API.Create for "bridge-998000" (driver="qemu2")
	I0816 05:42:46.174705    9848 client.go:168] LocalClient.Create starting
	I0816 05:42:46.174769    9848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:42:46.174799    9848 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:46.174816    9848 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:46.174861    9848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:42:46.174883    9848 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:46.174893    9848 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:46.175307    9848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:42:46.323545    9848 main.go:141] libmachine: Creating SSH key...
	I0816 05:42:46.370770    9848 main.go:141] libmachine: Creating Disk image...
	I0816 05:42:46.370776    9848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:42:46.370942    9848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2
	I0816 05:42:46.380075    9848 main.go:141] libmachine: STDOUT: 
	I0816 05:42:46.380095    9848 main.go:141] libmachine: STDERR: 
	I0816 05:42:46.380136    9848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2 +20000M
	I0816 05:42:46.388002    9848 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:42:46.388033    9848 main.go:141] libmachine: STDERR: 
	I0816 05:42:46.388053    9848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2
	I0816 05:42:46.388058    9848 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:42:46.388069    9848 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:42:46.388100    9848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ab:f1:12:3f:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2
	I0816 05:42:46.389770    9848 main.go:141] libmachine: STDOUT: 
	I0816 05:42:46.389786    9848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:42:46.389817    9848 client.go:171] duration metric: took 215.110584ms to LocalClient.Create
	I0816 05:42:48.392002    9848 start.go:128] duration metric: took 2.241237583s to createHost
	I0816 05:42:48.392068    9848 start.go:83] releasing machines lock for "bridge-998000", held for 2.241372167s
	W0816 05:42:48.392154    9848 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:48.403460    9848 out.go:177] * Deleting "bridge-998000" in qemu2 ...
	W0816 05:42:48.434443    9848 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:48.434476    9848 start.go:729] Will try again in 5 seconds ...
	I0816 05:42:53.436584    9848 start.go:360] acquireMachinesLock for bridge-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:42:53.436801    9848 start.go:364] duration metric: took 180.708µs to acquireMachinesLock for "bridge-998000"
	I0816 05:42:53.436833    9848 start.go:93] Provisioning new machine with config: &{Name:bridge-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:42:53.436931    9848 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:42:53.454167    9848 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:42:53.475824    9848 start.go:159] libmachine.API.Create for "bridge-998000" (driver="qemu2")
	I0816 05:42:53.475865    9848 client.go:168] LocalClient.Create starting
	I0816 05:42:53.475939    9848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:42:53.475992    9848 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:53.476005    9848 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:53.476062    9848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:42:53.476088    9848 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:53.476096    9848 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:53.476575    9848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:42:53.628781    9848 main.go:141] libmachine: Creating SSH key...
	I0816 05:42:53.685857    9848 main.go:141] libmachine: Creating Disk image...
	I0816 05:42:53.685866    9848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:42:53.686063    9848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2
	I0816 05:42:53.695420    9848 main.go:141] libmachine: STDOUT: 
	I0816 05:42:53.695442    9848 main.go:141] libmachine: STDERR: 
	I0816 05:42:53.695490    9848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2 +20000M
	I0816 05:42:53.703730    9848 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:42:53.703746    9848 main.go:141] libmachine: STDERR: 
	I0816 05:42:53.703755    9848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2
	I0816 05:42:53.703760    9848 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:42:53.703772    9848 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:42:53.703809    9848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:84:c1:07:ed:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/bridge-998000/disk.qcow2
	I0816 05:42:53.705424    9848 main.go:141] libmachine: STDOUT: 
	I0816 05:42:53.705443    9848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:42:53.705456    9848 client.go:171] duration metric: took 229.591833ms to LocalClient.Create
	I0816 05:42:55.707187    9848 start.go:128] duration metric: took 2.270272s to createHost
	I0816 05:42:55.707224    9848 start.go:83] releasing machines lock for "bridge-998000", held for 2.270449167s
	W0816 05:42:55.707370    9848 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:42:55.715496    9848 out.go:201] 
	W0816 05:42:55.721580    9848 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:42:55.721586    9848 out.go:270] * 
	* 
	W0816 05:42:55.722340    9848 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:42:55.732539    9848 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.71s)
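Every failure in this group dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU is never launched. A hedged first check on the build agent (the service handling below assumes a standard Homebrew install of socket_vmnet and may differ on this machine):

    # confirm the daemon is registered with launchd and its unix socket exists
    sudo launchctl list | grep -i socket_vmnet
    ls -l /var/run/socket_vmnet
    # if the socket is missing, restart the Homebrew-managed service
    sudo brew services restart socket_vmnet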

TestNetworkPlugins/group/kubenet/Start (9.9s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-998000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.897753875s)

-- stdout --
	* [kubenet-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-998000" primary control-plane node in "kubenet-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:42:57.955246    9960 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:42:57.955386    9960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:42:57.955390    9960 out.go:358] Setting ErrFile to fd 2...
	I0816 05:42:57.955393    9960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:42:57.955534    9960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:42:57.956746    9960 out.go:352] Setting JSON to false
	I0816 05:42:57.974921    9960 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6146,"bootTime":1723806031,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:42:57.974997    9960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:42:57.980389    9960 out.go:177] * [kubenet-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:42:57.988408    9960 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:42:57.988447    9960 notify.go:220] Checking for updates...
	I0816 05:42:57.995290    9960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:42:57.998356    9960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:42:58.001376    9960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:42:58.004345    9960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:42:58.007413    9960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:42:58.010775    9960 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:42:58.010844    9960 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:42:58.010881    9960 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:42:58.014326    9960 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:42:58.021400    9960 start.go:297] selected driver: qemu2
	I0816 05:42:58.021406    9960 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:42:58.021413    9960 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:42:58.023674    9960 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:42:58.027377    9960 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:42:58.030497    9960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:42:58.030540    9960 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0816 05:42:58.030568    9960 start.go:340] cluster config:
	{Name:kubenet-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:42:58.033946    9960 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:42:58.037262    9960 out.go:177] * Starting "kubenet-998000" primary control-plane node in "kubenet-998000" cluster
	I0816 05:42:58.045395    9960 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:42:58.045413    9960 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:42:58.045426    9960 cache.go:56] Caching tarball of preloaded images
	I0816 05:42:58.045521    9960 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:42:58.045527    9960 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:42:58.045600    9960 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/kubenet-998000/config.json ...
	I0816 05:42:58.045611    9960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/kubenet-998000/config.json: {Name:mk2cbecea171a983008a51354db2c01bd5c2c1ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:42:58.045832    9960 start.go:360] acquireMachinesLock for kubenet-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:42:58.045864    9960 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "kubenet-998000"
	I0816 05:42:58.045876    9960 start.go:93] Provisioning new machine with config: &{Name:kubenet-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:42:58.045908    9960 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:42:58.054391    9960 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:42:58.070755    9960 start.go:159] libmachine.API.Create for "kubenet-998000" (driver="qemu2")
	I0816 05:42:58.070788    9960 client.go:168] LocalClient.Create starting
	I0816 05:42:58.070846    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:42:58.070876    9960 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:58.070889    9960 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:58.070926    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:42:58.070956    9960 main.go:141] libmachine: Decoding PEM data...
	I0816 05:42:58.070968    9960 main.go:141] libmachine: Parsing certificate...
	I0816 05:42:58.071417    9960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:42:58.264728    9960 main.go:141] libmachine: Creating SSH key...
	I0816 05:42:58.350041    9960 main.go:141] libmachine: Creating Disk image...
	I0816 05:42:58.350056    9960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:42:58.350235    9960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2
	I0816 05:42:58.359706    9960 main.go:141] libmachine: STDOUT: 
	I0816 05:42:58.359732    9960 main.go:141] libmachine: STDERR: 
	I0816 05:42:58.359782    9960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2 +20000M
	I0816 05:42:58.367990    9960 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:42:58.368006    9960 main.go:141] libmachine: STDERR: 
	I0816 05:42:58.368029    9960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2
	I0816 05:42:58.368033    9960 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:42:58.368048    9960 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:42:58.368076    9960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:5d:ab:d2:a7:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2
	I0816 05:42:58.369744    9960 main.go:141] libmachine: STDOUT: 
	I0816 05:42:58.369758    9960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:42:58.369781    9960 client.go:171] duration metric: took 298.993666ms to LocalClient.Create
	I0816 05:43:00.371935    9960 start.go:128] duration metric: took 2.326039708s to createHost
	I0816 05:43:00.372020    9960 start.go:83] releasing machines lock for "kubenet-998000", held for 2.326184375s
	W0816 05:43:00.372070    9960 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:00.379253    9960 out.go:177] * Deleting "kubenet-998000" in qemu2 ...
	W0816 05:43:00.401778    9960 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:00.401807    9960 start.go:729] Will try again in 5 seconds ...
	I0816 05:43:05.403894    9960 start.go:360] acquireMachinesLock for kubenet-998000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:05.404064    9960 start.go:364] duration metric: took 142.417µs to acquireMachinesLock for "kubenet-998000"
	I0816 05:43:05.404099    9960 start.go:93] Provisioning new machine with config: &{Name:kubenet-998000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:05.404141    9960 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:43:05.412459    9960 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 05:43:05.428532    9960 start.go:159] libmachine.API.Create for "kubenet-998000" (driver="qemu2")
	I0816 05:43:05.428587    9960 client.go:168] LocalClient.Create starting
	I0816 05:43:05.428649    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:43:05.428687    9960 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:05.428696    9960 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:05.428732    9960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:43:05.428754    9960 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:05.428761    9960 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:05.429883    9960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:43:05.612739    9960 main.go:141] libmachine: Creating SSH key...
	I0816 05:43:05.746277    9960 main.go:141] libmachine: Creating Disk image...
	I0816 05:43:05.746284    9960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:43:05.746458    9960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2
	I0816 05:43:05.756094    9960 main.go:141] libmachine: STDOUT: 
	I0816 05:43:05.756123    9960 main.go:141] libmachine: STDERR: 
	I0816 05:43:05.756181    9960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2 +20000M
	I0816 05:43:05.764370    9960 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:43:05.764401    9960 main.go:141] libmachine: STDERR: 
	I0816 05:43:05.764414    9960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2
	I0816 05:43:05.764419    9960 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:43:05.764431    9960 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:05.764457    9960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ea:67:03:75:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000/disk.qcow2
	I0816 05:43:05.766330    9960 main.go:141] libmachine: STDOUT: 
	I0816 05:43:05.766362    9960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:05.766377    9960 client.go:171] duration metric: took 337.79075ms to LocalClient.Create
	I0816 05:43:07.768441    9960 start.go:128] duration metric: took 2.36432525s to createHost
	I0816 05:43:07.768474    9960 start.go:83] releasing machines lock for "kubenet-998000", held for 2.364443917s
	W0816 05:43:07.768631    9960 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:07.786824    9960 out.go:201] 
	W0816 05:43:07.798557    9960 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:07.798568    9960 out.go:270] * 
	* 
	W0816 05:43:07.799426    9960 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:43:07.812870    9960 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.90s)
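Note that every attempt gets as far as creating the guest disk; only the network hookup fails. To rule out a storage problem, the two qemu-img steps from the log can be replayed by hand (MACHINE_DIR is shorthand introduced here for the machine directory shown in the log above):

    # convert the raw scratch image to qcow2, then grow it by 20000 MB
    MACHINE_DIR=/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/kubenet-998000
    qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"
    qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M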

TestStartStop/group/old-k8s-version/serial/FirstStart (10.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-861000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-861000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.169830833s)

-- stdout --
	* [old-k8s-version-861000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-861000" primary control-plane node in "old-k8s-version-861000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-861000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:43:10.003405   10073 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:10.003555   10073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:10.003558   10073 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:10.003560   10073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:10.003686   10073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:10.004784   10073 out.go:352] Setting JSON to false
	I0816 05:43:10.021104   10073 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6159,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:43:10.021173   10073 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:10.027863   10073 out.go:177] * [old-k8s-version-861000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:43:10.035874   10073 notify.go:220] Checking for updates...
	I0816 05:43:10.038870   10073 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:10.041948   10073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:43:10.045077   10073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:43:10.048821   10073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:10.051889   10073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:43:10.054917   10073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:10.058161   10073 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:10.058227   10073 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:43:10.058276   10073 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:10.062931   10073 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:43:10.069909   10073 start.go:297] selected driver: qemu2
	I0816 05:43:10.069919   10073 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:43:10.069927   10073 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:10.072316   10073 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:43:10.076909   10073 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:43:10.080033   10073 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:43:10.080060   10073 cni.go:84] Creating CNI manager for ""
	I0816 05:43:10.080069   10073 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 05:43:10.080094   10073 start.go:340] cluster config:
	{Name:old-k8s-version-861000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:10.083723   10073 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:10.091891   10073 out.go:177] * Starting "old-k8s-version-861000" primary control-plane node in "old-k8s-version-861000" cluster
	I0816 05:43:10.095896   10073 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:43:10.095917   10073 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 05:43:10.095926   10073 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:10.095989   10073 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:43:10.095996   10073 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 05:43:10.096084   10073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/old-k8s-version-861000/config.json ...
	I0816 05:43:10.096098   10073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/old-k8s-version-861000/config.json: {Name:mk0c35a3eeab83d9d24ffe3d2ec5eb8d2ee5a7d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:10.096358   10073 start.go:360] acquireMachinesLock for old-k8s-version-861000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:10.096391   10073 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "old-k8s-version-861000"
	I0816 05:43:10.096402   10073 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:10.096427   10073 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:43:10.099896   10073 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:43:10.114724   10073 start.go:159] libmachine.API.Create for "old-k8s-version-861000" (driver="qemu2")
	I0816 05:43:10.114747   10073 client.go:168] LocalClient.Create starting
	I0816 05:43:10.114810   10073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:43:10.114841   10073 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:10.114855   10073 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:10.114892   10073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:43:10.114914   10073 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:10.114920   10073 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:10.115343   10073 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:43:10.278364   10073 main.go:141] libmachine: Creating SSH key...
	I0816 05:43:10.559239   10073 main.go:141] libmachine: Creating Disk image...
	I0816 05:43:10.559250   10073 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:43:10.559644   10073 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2
	I0816 05:43:10.569409   10073 main.go:141] libmachine: STDOUT: 
	I0816 05:43:10.569433   10073 main.go:141] libmachine: STDERR: 
	I0816 05:43:10.569488   10073 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2 +20000M
	I0816 05:43:10.577437   10073 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:43:10.577451   10073 main.go:141] libmachine: STDERR: 
	I0816 05:43:10.577470   10073 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2
	I0816 05:43:10.577475   10073 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:43:10.577488   10073 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:10.577520   10073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:a4:2e:b7:88:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2
	I0816 05:43:10.579114   10073 main.go:141] libmachine: STDOUT: 
	I0816 05:43:10.579132   10073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:10.579152   10073 client.go:171] duration metric: took 464.408375ms to LocalClient.Create
	I0816 05:43:12.581338   10073 start.go:128] duration metric: took 2.484922875s to createHost
	I0816 05:43:12.581439   10073 start.go:83] releasing machines lock for "old-k8s-version-861000", held for 2.48508025s
	W0816 05:43:12.581542   10073 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:12.589025   10073 out.go:177] * Deleting "old-k8s-version-861000" in qemu2 ...
	W0816 05:43:12.616409   10073 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:12.616445   10073 start.go:729] Will try again in 5 seconds ...
	I0816 05:43:17.616659   10073 start.go:360] acquireMachinesLock for old-k8s-version-861000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:17.617241   10073 start.go:364] duration metric: took 467.125µs to acquireMachinesLock for "old-k8s-version-861000"
	I0816 05:43:17.617426   10073 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:17.617697   10073 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:43:17.626489   10073 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:43:17.672660   10073 start.go:159] libmachine.API.Create for "old-k8s-version-861000" (driver="qemu2")
	I0816 05:43:17.672718   10073 client.go:168] LocalClient.Create starting
	I0816 05:43:17.672851   10073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:43:17.672913   10073 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:17.672930   10073 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:17.672987   10073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:43:17.673032   10073 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:17.673043   10073 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:17.673581   10073 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:43:17.835227   10073 main.go:141] libmachine: Creating SSH key...
	I0816 05:43:18.080289   10073 main.go:141] libmachine: Creating Disk image...
	I0816 05:43:18.080303   10073 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:43:18.080499   10073 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2
	I0816 05:43:18.090302   10073 main.go:141] libmachine: STDOUT: 
	I0816 05:43:18.090324   10073 main.go:141] libmachine: STDERR: 
	I0816 05:43:18.090374   10073 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2 +20000M
	I0816 05:43:18.098392   10073 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:43:18.098408   10073 main.go:141] libmachine: STDERR: 
	I0816 05:43:18.098420   10073 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2
	I0816 05:43:18.098425   10073 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:43:18.098437   10073 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:18.098473   10073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:59:28:9b:fe:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2
	I0816 05:43:18.100136   10073 main.go:141] libmachine: STDOUT: 
	I0816 05:43:18.100151   10073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:18.100164   10073 client.go:171] duration metric: took 427.44575ms to LocalClient.Create
	I0816 05:43:20.102354   10073 start.go:128] duration metric: took 2.484650209s to createHost
	I0816 05:43:20.102462   10073 start.go:83] releasing machines lock for "old-k8s-version-861000", held for 2.485237416s
	W0816 05:43:20.102924   10073 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-861000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-861000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:20.114433   10073 out.go:201] 
	W0816 05:43:20.118593   10073 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:20.118628   10073 out.go:270] * 
	* 
	W0816 05:43:20.121609   10073 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:43:20.130450   10073 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-861000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (65.913584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.24s)
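The "Connection refused" is reported by the wrapper, not by QEMU itself: socket_vmnet_client first connects to the daemon's unix socket and only then execs qemu-system-aarch64, handing the vmnet connection down as file descriptor 3 (which is why the command lines in this report use -netdev socket,id=net0,fd=3). A trimmed sketch of that invocation, with every argument taken from the failing command above:

    # socket_vmnet_client <socket> <command...>; fd 3 carries the vmnet stream
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
      -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3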

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-861000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-861000 create -f testdata/busybox.yaml: exit status 1 (30.408083ms)

** stderr ** 
	error: context "old-k8s-version-861000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-861000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (29.237583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-861000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (29.596625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
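
The context "old-k8s-version-861000" does not exist errors in this and the following subtests are knock-on effects of the failed FirstStart: minikube never got far enough to write the profile into the kubeconfig, so every kubectl --context invocation fails identically. A sketch of the lookup kubectl performs, assuming k8s.io/client-go's standard kubeconfig loading rules:

	// minimal sketch using k8s.io/client-go; the context name is the
	// profile name from the test output above.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["old-k8s-version-861000"]; !ok {
			fmt.Println(`context "old-k8s-version-861000" does not exist`)
		}
	}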

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-861000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-861000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-861000 describe deploy/metrics-server -n kube-system: exit status 1 (27.402667ms)

** stderr ** 
	error: context "old-k8s-version-861000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-861000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (30.57175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
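
The expected image string in this subtest is composed from the two flags passed to addons enable: the --registries override is prefixed to the --images override. A sketch of that composition, inferred from the test's own expectation rather than from minikube's source:

	// illustrative only: how "fake.domain/registry.k8s.io/echoserver:1.4"
	// (the string the assertion looks for) is formed from the flags.
	package main

	import "fmt"

	func main() {
		registry := "fake.domain"                 // --registries=MetricsServer=fake.domain
		image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=registry.k8s.io/echoserver:1.4
		fmt.Println(registry + "/" + image)
	}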

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-861000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-861000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.184038709s)

-- stdout --
	* [old-k8s-version-861000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-861000" primary control-plane node in "old-k8s-version-861000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-861000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-861000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:43:23.944523   10125 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:23.944661   10125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:23.944665   10125 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:23.944667   10125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:23.944796   10125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:23.945854   10125 out.go:352] Setting JSON to false
	I0816 05:43:23.962661   10125 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6172,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:43:23.962736   10125 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:23.967988   10125 out.go:177] * [old-k8s-version-861000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:43:23.975962   10125 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:23.976059   10125 notify.go:220] Checking for updates...
	I0816 05:43:23.982971   10125 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:43:23.986905   10125 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:43:23.989938   10125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:23.992853   10125 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:43:23.995922   10125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:23.999185   10125 config.go:182] Loaded profile config "old-k8s-version-861000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0816 05:43:24.002806   10125 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 05:43:24.005925   10125 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:24.008994   10125 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:43:24.015926   10125 start.go:297] selected driver: qemu2
	I0816 05:43:24.015931   10125 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:24.015976   10125 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:24.018200   10125 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:43:24.018242   10125 cni.go:84] Creating CNI manager for ""
	I0816 05:43:24.018248   10125 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 05:43:24.018275   10125 start.go:340] cluster config:
	{Name:old-k8s-version-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:24.021680   10125 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:24.029945   10125 out.go:177] * Starting "old-k8s-version-861000" primary control-plane node in "old-k8s-version-861000" cluster
	I0816 05:43:24.033912   10125 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:43:24.033928   10125 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 05:43:24.033935   10125 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:24.033998   10125 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:43:24.034003   10125 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 05:43:24.034058   10125 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/old-k8s-version-861000/config.json ...
	I0816 05:43:24.034341   10125 start.go:360] acquireMachinesLock for old-k8s-version-861000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:24.034367   10125 start.go:364] duration metric: took 20.209µs to acquireMachinesLock for "old-k8s-version-861000"
	I0816 05:43:24.034377   10125 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:24.034381   10125 fix.go:54] fixHost starting: 
	I0816 05:43:24.034495   10125 fix.go:112] recreateIfNeeded on old-k8s-version-861000: state=Stopped err=<nil>
	W0816 05:43:24.034502   10125 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:24.037923   10125 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-861000" ...
	I0816 05:43:24.045899   10125 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:24.045930   10125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:59:28:9b:fe:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2
	I0816 05:43:24.047904   10125 main.go:141] libmachine: STDOUT: 
	I0816 05:43:24.047923   10125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:24.047954   10125 fix.go:56] duration metric: took 13.572ms for fixHost
	I0816 05:43:24.047957   10125 start.go:83] releasing machines lock for "old-k8s-version-861000", held for 13.586291ms
	W0816 05:43:24.047964   10125 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:24.047993   10125 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:24.047997   10125 start.go:729] Will try again in 5 seconds ...
	I0816 05:43:29.050096   10125 start.go:360] acquireMachinesLock for old-k8s-version-861000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:29.050369   10125 start.go:364] duration metric: took 220.084µs to acquireMachinesLock for "old-k8s-version-861000"
	I0816 05:43:29.050422   10125 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:29.050434   10125 fix.go:54] fixHost starting: 
	I0816 05:43:29.050880   10125 fix.go:112] recreateIfNeeded on old-k8s-version-861000: state=Stopped err=<nil>
	W0816 05:43:29.050896   10125 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:29.058206   10125 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-861000" ...
	I0816 05:43:29.061165   10125 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:29.061289   10125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:59:28:9b:fe:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/old-k8s-version-861000/disk.qcow2
	I0816 05:43:29.066214   10125 main.go:141] libmachine: STDOUT: 
	I0816 05:43:29.066254   10125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:29.066311   10125 fix.go:56] duration metric: took 15.880084ms for fixHost
	I0816 05:43:29.066321   10125 start.go:83] releasing machines lock for "old-k8s-version-861000", held for 15.936458ms
	W0816 05:43:29.066405   10125 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-861000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-861000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:29.074075   10125 out.go:201] 
	W0816 05:43:29.078158   10125 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:29.078171   10125 out.go:270] * 
	* 
	W0816 05:43:29.079544   10125 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:43:29.089009   10125 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-861000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (54.262791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
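
The failing command line in the log shows the indirection involved: qemu-system-aarch64 is launched through socket_vmnet_client, which apparently dials /var/run/socket_vmnet first and hands the connected descriptor to qemu as fd 3 (hence -netdev socket,id=net0,fd=3). A reduced sketch of that fd-passing pattern, inferred from the command line rather than from socket_vmnet's source; the real qemu argument list is far longer:

	// illustrative fd-passing sketch; not socket_vmnet_client itself.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err) // the "Connection refused" seen on both attempts above
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("qemu-system-aarch64",
			"-netdev", "socket,id=net0,fd=3") // remaining flags omitted
		cmd.ExtraFiles = []*os.File{f}        // ExtraFiles[0] becomes fd 3 in the child
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

Because the dial happens before qemu is even spawned, a dead daemon fails the restart path here exactly as it fails the fresh-create path in the embed-certs test below.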

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-861000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (32.739ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-861000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-861000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-861000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.551167ms)

** stderr ** 
	error: context "old-k8s-version-861000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-861000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (31.757959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-861000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (29.618042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
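
The (-want +got) block above is a go-cmp style diff: every expected image carries a leading "-" and nothing carries "+", meaning image list returned an empty set because the host never started. A sketch of how that output shape is produced, assuming github.com/google/go-cmp (whose conventions the format matches):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"k8s.gcr.io/pause:3.2", "k8s.gcr.io/etcd:3.4.13-0"}
		got := []string{} // empty: the VM is stopped, so nothing is listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}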

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-861000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-861000 --alsologtostderr -v=1: exit status 83 (41.72575ms)

-- stdout --
	* The control-plane node old-k8s-version-861000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-861000"

-- /stdout --
** stderr ** 
	I0816 05:43:29.348671   10144 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:29.349095   10144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:29.349100   10144 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:29.349103   10144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:29.349233   10144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:29.349448   10144 out.go:352] Setting JSON to false
	I0816 05:43:29.349456   10144 mustload.go:65] Loading cluster: old-k8s-version-861000
	I0816 05:43:29.349629   10144 config.go:182] Loaded profile config "old-k8s-version-861000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0816 05:43:29.353653   10144 out.go:177] * The control-plane node old-k8s-version-861000 host is not running: state=Stopped
	I0816 05:43:29.357718   10144 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-861000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-861000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (30.095666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-861000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (29.093583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-023000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-023000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.966138042s)

-- stdout --
	* [embed-certs-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-023000" primary control-plane node in "embed-certs-023000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-023000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:43:29.669824   10161 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:29.669976   10161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:29.669980   10161 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:29.669982   10161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:29.670097   10161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:29.671205   10161 out.go:352] Setting JSON to false
	I0816 05:43:29.687372   10161 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6178,"bootTime":1723806031,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:43:29.687441   10161 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:29.691009   10161 out.go:177] * [embed-certs-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:43:29.698052   10161 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:29.698072   10161 notify.go:220] Checking for updates...
	I0816 05:43:29.706048   10161 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:43:29.709078   10161 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:43:29.712060   10161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:29.715055   10161 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:43:29.718071   10161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:29.721335   10161 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:29.721391   10161 config.go:182] Loaded profile config "stopped-upgrade-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 05:43:29.721433   10161 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:29.725978   10161 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:43:29.731938   10161 start.go:297] selected driver: qemu2
	I0816 05:43:29.731944   10161 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:43:29.731949   10161 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:29.734044   10161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:43:29.737070   10161 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:43:29.741123   10161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:43:29.741139   10161 cni.go:84] Creating CNI manager for ""
	I0816 05:43:29.741154   10161 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:43:29.741158   10161 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:43:29.741179   10161 start.go:340] cluster config:
	{Name:embed-certs-023000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:29.744669   10161 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:29.752092   10161 out.go:177] * Starting "embed-certs-023000" primary control-plane node in "embed-certs-023000" cluster
	I0816 05:43:29.756007   10161 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:29.756023   10161 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:43:29.756031   10161 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:29.756110   10161 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:43:29.756115   10161 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:43:29.756189   10161 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/embed-certs-023000/config.json ...
	I0816 05:43:29.756200   10161 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/embed-certs-023000/config.json: {Name:mk07fec27148208dd50e9a2ec27820600bd33692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:29.756420   10161 start.go:360] acquireMachinesLock for embed-certs-023000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:29.756453   10161 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "embed-certs-023000"
	I0816 05:43:29.756466   10161 start.go:93] Provisioning new machine with config: &{Name:embed-certs-023000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:29.756504   10161 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:43:29.764092   10161 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:43:29.779860   10161 start.go:159] libmachine.API.Create for "embed-certs-023000" (driver="qemu2")
	I0816 05:43:29.779890   10161 client.go:168] LocalClient.Create starting
	I0816 05:43:29.779963   10161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:43:29.779999   10161 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:29.780012   10161 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:29.780057   10161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:43:29.780079   10161 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:29.780086   10161 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:29.780491   10161 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:43:29.933453   10161 main.go:141] libmachine: Creating SSH key...
	I0816 05:43:30.034192   10161 main.go:141] libmachine: Creating Disk image...
	I0816 05:43:30.034198   10161 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:43:30.034369   10161 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2
	I0816 05:43:30.043895   10161 main.go:141] libmachine: STDOUT: 
	I0816 05:43:30.043916   10161 main.go:141] libmachine: STDERR: 
	I0816 05:43:30.043978   10161 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2 +20000M
	I0816 05:43:30.052360   10161 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:43:30.052379   10161 main.go:141] libmachine: STDERR: 
	I0816 05:43:30.052397   10161 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2
	I0816 05:43:30.052402   10161 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:43:30.052415   10161 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:30.052450   10161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:d8:04:4b:f3:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2
	I0816 05:43:30.054292   10161 main.go:141] libmachine: STDOUT: 
	I0816 05:43:30.054308   10161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:30.054326   10161 client.go:171] duration metric: took 274.436459ms to LocalClient.Create
	I0816 05:43:32.056449   10161 start.go:128] duration metric: took 2.299970417s to createHost
	I0816 05:43:32.056479   10161 start.go:83] releasing machines lock for "embed-certs-023000", held for 2.300057875s
	W0816 05:43:32.056510   10161 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:32.066570   10161 out.go:177] * Deleting "embed-certs-023000" in qemu2 ...
	W0816 05:43:32.087662   10161 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:32.087676   10161 start.go:729] Will try again in 5 seconds ...
	I0816 05:43:37.089848   10161 start.go:360] acquireMachinesLock for embed-certs-023000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:37.090397   10161 start.go:364] duration metric: took 443.625µs to acquireMachinesLock for "embed-certs-023000"
	I0816 05:43:37.090525   10161 start.go:93] Provisioning new machine with config: &{Name:embed-certs-023000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:37.090792   10161 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:43:37.100379   10161 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:43:37.145644   10161 start.go:159] libmachine.API.Create for "embed-certs-023000" (driver="qemu2")
	I0816 05:43:37.145707   10161 client.go:168] LocalClient.Create starting
	I0816 05:43:37.145832   10161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:43:37.145908   10161 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:37.145924   10161 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:37.145991   10161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:43:37.146036   10161 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:37.146052   10161 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:37.146562   10161 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:43:37.309073   10161 main.go:141] libmachine: Creating SSH key...
	I0816 05:43:37.515049   10161 main.go:141] libmachine: Creating Disk image...
	I0816 05:43:37.515057   10161 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:43:37.515246   10161 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2
	I0816 05:43:37.526681   10161 main.go:141] libmachine: STDOUT: 
	I0816 05:43:37.526703   10161 main.go:141] libmachine: STDERR: 
	I0816 05:43:37.526771   10161 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2 +20000M
	I0816 05:43:37.559279   10161 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:43:37.559304   10161 main.go:141] libmachine: STDERR: 
	I0816 05:43:37.559316   10161 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2
	I0816 05:43:37.559321   10161 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:43:37.559328   10161 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:37.559369   10161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:1f:2c:3a:cc:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2
	I0816 05:43:37.562005   10161 main.go:141] libmachine: STDOUT: 
	I0816 05:43:37.562024   10161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:37.562038   10161 client.go:171] duration metric: took 416.332375ms to LocalClient.Create
	I0816 05:43:39.564445   10161 start.go:128] duration metric: took 2.473409417s to createHost
	I0816 05:43:39.564518   10161 start.go:83] releasing machines lock for "embed-certs-023000", held for 2.474138125s
	W0816 05:43:39.564833   10161 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-023000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-023000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:39.577624   10161 out.go:201] 
	W0816 05:43:39.586011   10161 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:39.586050   10161 out.go:270] * 
	* 
	W0816 05:43:39.588179   10161 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:43:39.598301   10161 out.go:201] 

** /stderr **
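Note on the provisioning sequence in the stderr log above: the qemu2 driver builds the guest disk in two steps, converting a raw seed image to qcow2 and then growing it to the configured size. Both qemu-img invocations appear verbatim in the log; with the paths shortened to basenames, the same two commands are:

    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M

The +20000M matches DiskSize:20000 in the cluster config, and both steps exit cleanly (empty STDERR), so disk creation is not the failure point; the run only breaks at the VM launch that follows.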
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-023000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (48.505167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.02s)
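Every VM create in this group dies at the same point: socket_vmnet_client exits with Failed to connect to "/var/run/socket_vmnet": Connection refused before qemu-system-aarch64 ever runs, which indicates the socket_vmnet daemon is not listening on the CI host. A hedged diagnostic sketch one could run on the agent (the binary path matches the /opt/socket_vmnet layout shown in the logs; the launch flags below are from socket_vmnet's own documentation and should be checked against the installed version):

    ls -l /var/run/socket_vmnet    # the unix socket should exist
    pgrep -fl socket_vmnet         # the daemon process should be listed
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Until that daemon is restored, every qemu2-driver test that needs networking will fail identically, which is consistent with the large failure count in this report.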

TestStartStop/group/no-preload/serial/FirstStart (11.82s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-576000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-576000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (11.758635541s)

-- stdout --
	* [no-preload-576000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-576000" primary control-plane node in "no-preload-576000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-576000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:43:37.722078   10188 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:37.722227   10188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:37.722231   10188 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:37.722233   10188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:37.722365   10188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:37.723450   10188 out.go:352] Setting JSON to false
	I0816 05:43:37.739816   10188 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6186,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:43:37.739883   10188 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:37.744504   10188 out.go:177] * [no-preload-576000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:43:37.751480   10188 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:37.751523   10188 notify.go:220] Checking for updates...
	I0816 05:43:37.758473   10188 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:43:37.762437   10188 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:43:37.765465   10188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:37.768526   10188 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:43:37.771456   10188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:37.774805   10188 config.go:182] Loaded profile config "embed-certs-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:37.774866   10188 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:37.774925   10188 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:37.779467   10188 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:43:37.786476   10188 start.go:297] selected driver: qemu2
	I0816 05:43:37.786485   10188 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:43:37.786493   10188 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:37.788999   10188 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:43:37.792497   10188 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:43:37.795572   10188 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:43:37.795607   10188 cni.go:84] Creating CNI manager for ""
	I0816 05:43:37.795613   10188 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:43:37.795618   10188 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:43:37.795652   10188 start.go:340] cluster config:
	{Name:no-preload-576000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:37.799469   10188 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:37.807283   10188 out.go:177] * Starting "no-preload-576000" primary control-plane node in "no-preload-576000" cluster
	I0816 05:43:37.811395   10188 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:37.811490   10188 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/no-preload-576000/config.json ...
	I0816 05:43:37.811489   10188 cache.go:107] acquiring lock: {Name:mk0ee725585939851e658401112124e8d27976db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:37.811492   10188 cache.go:107] acquiring lock: {Name:mk329c0b5aaaf5895567c32fd5de81d3aee0d999 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:37.811506   10188 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/no-preload-576000/config.json: {Name:mkd2bdbace12ede8929858bff647e7d8004a526e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:37.811508   10188 cache.go:107] acquiring lock: {Name:mk6ba78cd053f31de333b084a8280b7fbbd3a623 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:37.811518   10188 cache.go:107] acquiring lock: {Name:mk461f9517e3a93523215c91aa78bfdd8c0d2b63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:37.811519   10188 cache.go:107] acquiring lock: {Name:mk157234e051671ad3c50e2fb9901723312d8d1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:37.811531   10188 cache.go:107] acquiring lock: {Name:mkc7d0c3f160b662328e209741ea0ae0b3fa8393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:37.811531   10188 cache.go:107] acquiring lock: {Name:mkc7c1aeac3cb675af946cebdce88aa9a925a2a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:37.811750   10188 cache.go:115] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 05:43:37.811752   10188 cache.go:107] acquiring lock: {Name:mk9a8e77b0d11462130a0c07c8ac41a530de757e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:37.811765   10188 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 278.917µs
	I0816 05:43:37.811838   10188 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 05:43:37.811850   10188 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 05:43:37.811791   10188 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 05:43:37.811901   10188 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 05:43:37.811797   10188 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 05:43:37.811816   10188 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 05:43:37.812029   10188 start.go:360] acquireMachinesLock for no-preload-576000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:37.811825   10188 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 05:43:37.811867   10188 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 05:43:37.817675   10188 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 05:43:37.817677   10188 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 05:43:37.817743   10188 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 05:43:37.817819   10188 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 05:43:37.817831   10188 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 05:43:37.817877   10188 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 05:43:37.817996   10188 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 05:43:38.234429   10188 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 05:43:38.241987   10188 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 05:43:38.243823   10188 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 05:43:38.250291   10188 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0816 05:43:38.258854   10188 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0816 05:43:38.273929   10188 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 05:43:38.345348   10188 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 05:43:38.411865   10188 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0816 05:43:38.411933   10188 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 600.449333ms
	I0816 05:43:38.411965   10188 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0816 05:43:39.564666   10188 start.go:364] duration metric: took 1.752635083s to acquireMachinesLock for "no-preload-576000"
	I0816 05:43:39.564881   10188 start.go:93] Provisioning new machine with config: &{Name:no-preload-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:39.565091   10188 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:43:39.570562   10188 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:43:39.623970   10188 start.go:159] libmachine.API.Create for "no-preload-576000" (driver="qemu2")
	I0816 05:43:39.624011   10188 client.go:168] LocalClient.Create starting
	I0816 05:43:39.624150   10188 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:43:39.624207   10188 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:39.624224   10188 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:39.624286   10188 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:43:39.624330   10188 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:39.624344   10188 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:39.624936   10188 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:43:39.787166   10188 main.go:141] libmachine: Creating SSH key...
	I0816 05:43:39.907021   10188 main.go:141] libmachine: Creating Disk image...
	I0816 05:43:39.907030   10188 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:43:39.907247   10188 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2
	I0816 05:43:39.924071   10188 main.go:141] libmachine: STDOUT: 
	I0816 05:43:39.924094   10188 main.go:141] libmachine: STDERR: 
	I0816 05:43:39.924162   10188 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2 +20000M
	I0816 05:43:39.932895   10188 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:43:39.932912   10188 main.go:141] libmachine: STDERR: 
	I0816 05:43:39.932929   10188 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2
	I0816 05:43:39.932934   10188 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:43:39.932948   10188 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:39.932989   10188 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:68:1f:99:eb:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2
	I0816 05:43:39.934780   10188 main.go:141] libmachine: STDOUT: 
	I0816 05:43:39.934802   10188 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:39.934822   10188 client.go:171] duration metric: took 310.810792ms to LocalClient.Create
	I0816 05:43:41.409975   10188 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0816 05:43:41.410063   10188 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.598618583s
	I0816 05:43:41.410122   10188 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0816 05:43:41.935064   10188 start.go:128] duration metric: took 2.3699595s to createHost
	I0816 05:43:41.935169   10188 start.go:83] releasing machines lock for "no-preload-576000", held for 2.370507708s
	W0816 05:43:41.935239   10188 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:41.943505   10188 out.go:177] * Deleting "no-preload-576000" in qemu2 ...
	W0816 05:43:41.971868   10188 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:41.971926   10188 start.go:729] Will try again in 5 seconds ...
	I0816 05:43:42.064668   10188 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0816 05:43:42.064712   10188 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 4.253246208s
	I0816 05:43:42.064740   10188 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0816 05:43:42.103195   10188 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0816 05:43:42.103263   10188 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.291796583s
	I0816 05:43:42.103304   10188 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0816 05:43:42.379569   10188 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0816 05:43:42.379638   10188 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.568188s
	I0816 05:43:42.379674   10188 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0816 05:43:43.390706   10188 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0816 05:43:43.390716   10188 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 5.579289875s
	I0816 05:43:43.390723   10188 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0816 05:43:46.668183   10188 cache.go:157] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0816 05:43:46.668260   10188 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.856676917s
	I0816 05:43:46.668286   10188 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0816 05:43:46.668320   10188 cache.go:87] Successfully saved all images to host disk.
	I0816 05:43:46.973961   10188 start.go:360] acquireMachinesLock for no-preload-576000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:46.974344   10188 start.go:364] duration metric: took 321.834µs to acquireMachinesLock for "no-preload-576000"
	I0816 05:43:46.974438   10188 start.go:93] Provisioning new machine with config: &{Name:no-preload-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:46.974795   10188 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:43:46.981420   10188 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:43:47.031989   10188 start.go:159] libmachine.API.Create for "no-preload-576000" (driver="qemu2")
	I0816 05:43:47.032034   10188 client.go:168] LocalClient.Create starting
	I0816 05:43:47.032155   10188 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:43:47.032211   10188 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:47.032233   10188 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:47.032303   10188 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:43:47.032345   10188 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:47.032364   10188 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:47.032874   10188 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:43:47.195516   10188 main.go:141] libmachine: Creating SSH key...
	I0816 05:43:47.377874   10188 main.go:141] libmachine: Creating Disk image...
	I0816 05:43:47.377880   10188 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:43:47.378097   10188 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2
	I0816 05:43:47.387872   10188 main.go:141] libmachine: STDOUT: 
	I0816 05:43:47.387893   10188 main.go:141] libmachine: STDERR: 
	I0816 05:43:47.387952   10188 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2 +20000M
	I0816 05:43:47.396063   10188 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:43:47.396077   10188 main.go:141] libmachine: STDERR: 
	I0816 05:43:47.396088   10188 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2
	I0816 05:43:47.396092   10188 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:43:47.396102   10188 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:47.396148   10188 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:d3:8e:e7:e1:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2
	I0816 05:43:47.397908   10188 main.go:141] libmachine: STDOUT: 
	I0816 05:43:47.397924   10188 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:47.397938   10188 client.go:171] duration metric: took 365.904875ms to LocalClient.Create
	I0816 05:43:49.400291   10188 start.go:128] duration metric: took 2.425447042s to createHost
	I0816 05:43:49.400358   10188 start.go:83] releasing machines lock for "no-preload-576000", held for 2.426028709s
	W0816 05:43:49.400715   10188 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-576000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-576000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:49.413268   10188 out.go:201] 
	W0816 05:43:49.425338   10188 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:49.425372   10188 out.go:270] * 
	* 
	W0816 05:43:49.427829   10188 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:43:49.436116   10188 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-576000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (61.572917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (11.82s)
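The no-preload variant runs a couple of seconds longer than embed-certs because --preload=false bypasses the preloaded tarball and pulls each control-plane image individually into .minikube/cache/images; the stderr log above shows all of them, from pause through etcd, saved to host disk before the run gives up. One could confirm the per-image cache with the path from the logs, for example:

    ls /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io

so the image pulls themselves are healthy; only VM creation is broken, for the same socket_vmnet reason as above.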

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-023000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-023000 create -f testdata/busybox.yaml: exit status 1 (30.957792ms)

** stderr ** 
	error: context "embed-certs-023000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-023000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (33.616792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-023000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (33.203625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
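This is a cascading failure rather than a new one: FirstStart never brought a cluster up, so no embed-certs-023000 context was ever written to the kubeconfig, and kubectl rejects the --context flag immediately. A sketch of how to confirm, using the KUBECONFIG path from this run:

    kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/19423-6249/kubeconfig

On this host the listing would show no embed-certs-023000 entry, matching the "context does not exist" error above.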

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-023000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-023000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-023000 describe deploy/metrics-server -n kube-system: exit status 1 (27.722792ms)

** stderr ** 
	error: context "embed-certs-023000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-023000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (30.623542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
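Note that the addons enable step itself passed: it only rewrites the profile config, which is why the SecondStart log below already shows Addons:map[dashboard:true metrics-server:true] with the fake.domain registry override loaded. Only the kubectl verification fails, again for lack of a context. As a sketch, the persisted override could be inspected directly in the profile config whose path appears in the logs:

    grep MetricsServer /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/embed-certs-023000/config.json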

TestStartStop/group/embed-certs/serial/SecondStart (6.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-023000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-023000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.18720225s)

-- stdout --
	* [embed-certs-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-023000" primary control-plane node in "embed-certs-023000" cluster
	* Restarting existing qemu2 VM for "embed-certs-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:43:43.316738   10260 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:43.316876   10260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:43.316880   10260 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:43.316883   10260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:43.317015   10260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:43.318104   10260 out.go:352] Setting JSON to false
	I0816 05:43:43.334139   10260 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6192,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:43:43.334222   10260 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:43.338335   10260 out.go:177] * [embed-certs-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:43:43.345336   10260 notify.go:220] Checking for updates...
	I0816 05:43:43.348349   10260 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:43.355304   10260 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:43:43.363210   10260 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:43:43.371169   10260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:43.379278   10260 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:43:43.386228   10260 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:43.390573   10260 config.go:182] Loaded profile config "embed-certs-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:43.390886   10260 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:43.394250   10260 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:43:43.402269   10260 start.go:297] selected driver: qemu2
	I0816 05:43:43.402275   10260 start.go:901] validating driver "qemu2" against &{Name:embed-certs-023000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:43.402338   10260 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:43.404700   10260 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:43:43.404727   10260 cni.go:84] Creating CNI manager for ""
	I0816 05:43:43.404735   10260 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:43:43.404769   10260 start.go:340] cluster config:
	{Name:embed-certs-023000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:43.408337   10260 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:43.416202   10260 out.go:177] * Starting "embed-certs-023000" primary control-plane node in "embed-certs-023000" cluster
	I0816 05:43:43.419232   10260 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:43.419245   10260 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:43:43.419253   10260 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:43.419300   10260 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:43:43.419305   10260 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:43:43.419352   10260 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/embed-certs-023000/config.json ...
	I0816 05:43:43.419770   10260 start.go:360] acquireMachinesLock for embed-certs-023000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:43.419803   10260 start.go:364] duration metric: took 27.416µs to acquireMachinesLock for "embed-certs-023000"
	I0816 05:43:43.419812   10260 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:43.419817   10260 fix.go:54] fixHost starting: 
	I0816 05:43:43.419930   10260 fix.go:112] recreateIfNeeded on embed-certs-023000: state=Stopped err=<nil>
	W0816 05:43:43.419938   10260 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:43.427265   10260 out.go:177] * Restarting existing qemu2 VM for "embed-certs-023000" ...
	I0816 05:43:43.431342   10260 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:43.431380   10260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:1f:2c:3a:cc:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2
	I0816 05:43:43.433329   10260 main.go:141] libmachine: STDOUT: 
	I0816 05:43:43.433346   10260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:43.433374   10260 fix.go:56] duration metric: took 13.557625ms for fixHost
	I0816 05:43:43.433378   10260 start.go:83] releasing machines lock for "embed-certs-023000", held for 13.571041ms
	W0816 05:43:43.433384   10260 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:43.433421   10260 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:43.433425   10260 start.go:729] Will try again in 5 seconds ...
	I0816 05:43:48.435585   10260 start.go:360] acquireMachinesLock for embed-certs-023000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:49.400530   10260 start.go:364] duration metric: took 964.828666ms to acquireMachinesLock for "embed-certs-023000"
	I0816 05:43:49.400695   10260 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:49.400717   10260 fix.go:54] fixHost starting: 
	I0816 05:43:49.401415   10260 fix.go:112] recreateIfNeeded on embed-certs-023000: state=Stopped err=<nil>
	W0816 05:43:49.401440   10260 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:49.421273   10260 out.go:177] * Restarting existing qemu2 VM for "embed-certs-023000" ...
	I0816 05:43:49.428201   10260 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:49.428405   10260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:1f:2c:3a:cc:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/embed-certs-023000/disk.qcow2
	I0816 05:43:49.437736   10260 main.go:141] libmachine: STDOUT: 
	I0816 05:43:49.437792   10260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:49.437872   10260 fix.go:56] duration metric: took 37.160333ms for fixHost
	I0816 05:43:49.437892   10260 start.go:83] releasing machines lock for "embed-certs-023000", held for 37.325583ms
	W0816 05:43:49.438119   10260 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:49.452201   10260 out.go:201] 
	W0816 05:43:49.456452   10260 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:49.456537   10260 out.go:270] * 
	* 
	W0816 05:43:49.458641   10260 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:43:49.467266   10260 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-023000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (47.904375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.24s)
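
Every start attempt in the section above dies at the same step: the qemu2 driver cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet. A minimal Go sketch of just that connectivity check (socket path taken from the log; this program is not part of the test suite):

// probe_socket_vmnet.go - checks whether the socket_vmnet daemon is
// accepting connections on the unix socket the qemu2 driver uses.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the failures above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the driver error in the log.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this reports "connection refused", nothing is listening on the socket, which is consistent with every GUEST_PROVISION failure in this group.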

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-576000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-576000 create -f testdata/busybox.yaml: exit status 1 (30.901291ms)

** stderr ** 
	error: context "no-preload-576000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-576000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (32.509166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-576000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (34.490583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
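
The kubectl failure above is a knock-on effect of the earlier start failure: the cluster never came up, so no "no-preload-576000" context was ever written to the kubeconfig. A hypothetical pre-flight check in Go, shelling out to kubectl the same way the tests do (kubectl config get-contexts -o name prints one context name per line):

// check_context.go - verifies a kubeconfig context exists before using it,
// mirroring the `context "..." does not exist` failures above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "no-preload-576000" // context name from the failing test
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "kubectl failed: %v\n", err)
		os.Exit(1)
	}
	for _, name := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if name == ctx {
			fmt.Println("context exists:", ctx)
			return
		}
	}
	fmt.Fprintf(os.Stderr, "context %q does not exist\n", ctx)
	os.Exit(1)
}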

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-023000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (33.672625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-023000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-023000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-023000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.327875ms)

** stderr ** 
	error: context "embed-certs-023000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-023000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (31.863958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-576000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-576000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-576000 describe deploy/metrics-server -n kube-system: exit status 1 (28.728375ms)

** stderr ** 
	error: context "no-preload-576000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-576000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (32.874ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-023000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (32.670708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
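
The (-want +got) diff above lists every expected v1.31.0 image as missing because "image list" returned nothing from the stopped VM. The comparison itself is just a set difference; a small sketch of it (plain Go standing in for the cmp-style diff the test prints):

// image_diff.go - which expected images are absent from the "got" list.
package main

import "fmt"

func missing(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var out []string
	for _, img := range want {
		if !have[img] {
			out = append(out, img)
		}
	}
	return out
}

func main() {
	want := []string{ // subset of the expected list from the failure above
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // empty: the VM never started, so no images were listed
	fmt.Println(missing(want, got)) // prints all of want
}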

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-023000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-023000 --alsologtostderr -v=1: exit status 83 (42.94475ms)

-- stdout --
	* The control-plane node embed-certs-023000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-023000"

-- /stdout --
** stderr ** 
	I0816 05:43:49.736890   10294 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:49.737023   10294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:49.737026   10294 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:49.737029   10294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:49.737172   10294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:49.737404   10294 out.go:352] Setting JSON to false
	I0816 05:43:49.737412   10294 mustload.go:65] Loading cluster: embed-certs-023000
	I0816 05:43:49.737611   10294 config.go:182] Loaded profile config "embed-certs-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:49.741202   10294 out.go:177] * The control-plane node embed-certs-023000 host is not running: state=Stopped
	I0816 05:43:49.745187   10294 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-023000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-023000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (29.793541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-023000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (28.476958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
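
The post-mortem helpers above treat exit status 7 from the status command as "may be ok", since a stopped host is the expected state at this point. Distinguishing that exit code in Go takes an *exec.ExitError assertion; a sketch (command line copied from the log, binary path relative to the test workspace):

// status_exit.go - run "minikube status" and branch on its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-023000")
	out, err := cmd.Output() // stdout ("Stopped") is returned even on failure
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Printf("host stopped (exit 7, may be ok): %s", out)
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Printf("host state: %s", out)
}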

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-122000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-122000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.968987125s)

-- stdout --
	* [default-k8s-diff-port-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-122000" primary control-plane node in "default-k8s-diff-port-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:43:50.054605   10316 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:50.054744   10316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:50.054750   10316 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:50.054753   10316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:50.054877   10316 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:50.055925   10316 out.go:352] Setting JSON to false
	I0816 05:43:50.072091   10316 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6199,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:43:50.072169   10316 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:50.077263   10316 out.go:177] * [default-k8s-diff-port-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:43:50.084250   10316 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:50.084313   10316 notify.go:220] Checking for updates...
	I0816 05:43:50.091234   10316 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:43:50.094254   10316 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:43:50.097154   10316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:50.100183   10316 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:43:50.103259   10316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:50.106509   10316 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:50.106574   10316 config.go:182] Loaded profile config "no-preload-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:50.106618   10316 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:50.111191   10316 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:43:50.118156   10316 start.go:297] selected driver: qemu2
	I0816 05:43:50.118162   10316 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:43:50.118167   10316 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:50.120419   10316 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:43:50.123214   10316 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:43:50.126290   10316 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:43:50.126344   10316 cni.go:84] Creating CNI manager for ""
	I0816 05:43:50.126352   10316 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:43:50.126355   10316 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:43:50.126382   10316 start.go:340] cluster config:
	{Name:default-k8s-diff-port-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:50.130080   10316 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:50.138229   10316 out.go:177] * Starting "default-k8s-diff-port-122000" primary control-plane node in "default-k8s-diff-port-122000" cluster
	I0816 05:43:50.142187   10316 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:50.142208   10316 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:43:50.142221   10316 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:50.142294   10316 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:43:50.142301   10316 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:43:50.142388   10316 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/default-k8s-diff-port-122000/config.json ...
	I0816 05:43:50.142402   10316 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/default-k8s-diff-port-122000/config.json: {Name:mk1b1862100a8cb1d7f12ebbdb3e26aa042fc42c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:50.142697   10316 start.go:360] acquireMachinesLock for default-k8s-diff-port-122000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:50.142736   10316 start.go:364] duration metric: took 30.709µs to acquireMachinesLock for "default-k8s-diff-port-122000"
	I0816 05:43:50.142751   10316 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:50.142778   10316 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:43:50.151149   10316 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:43:50.168890   10316 start.go:159] libmachine.API.Create for "default-k8s-diff-port-122000" (driver="qemu2")
	I0816 05:43:50.168920   10316 client.go:168] LocalClient.Create starting
	I0816 05:43:50.168987   10316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:43:50.169020   10316 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:50.169029   10316 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:50.169070   10316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:43:50.169095   10316 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:50.169106   10316 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:50.169437   10316 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:43:50.324704   10316 main.go:141] libmachine: Creating SSH key...
	I0816 05:43:50.419242   10316 main.go:141] libmachine: Creating Disk image...
	I0816 05:43:50.419248   10316 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:43:50.419420   10316 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2
	I0816 05:43:50.428873   10316 main.go:141] libmachine: STDOUT: 
	I0816 05:43:50.428890   10316 main.go:141] libmachine: STDERR: 
	I0816 05:43:50.428951   10316 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2 +20000M
	I0816 05:43:50.436915   10316 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:43:50.436936   10316 main.go:141] libmachine: STDERR: 
	I0816 05:43:50.436954   10316 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2
	I0816 05:43:50.436958   10316 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:43:50.436968   10316 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:50.437000   10316 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:82:6b:cf:f0:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2
	I0816 05:43:50.438705   10316 main.go:141] libmachine: STDOUT: 
	I0816 05:43:50.438720   10316 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:50.438736   10316 client.go:171] duration metric: took 269.816834ms to LocalClient.Create
	I0816 05:43:52.440910   10316 start.go:128] duration metric: took 2.298147167s to createHost
	I0816 05:43:52.440971   10316 start.go:83] releasing machines lock for "default-k8s-diff-port-122000", held for 2.298263375s
	W0816 05:43:52.441058   10316 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:52.459585   10316 out.go:177] * Deleting "default-k8s-diff-port-122000" in qemu2 ...
	W0816 05:43:52.490214   10316 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:52.490238   10316 start.go:729] Will try again in 5 seconds ...
	I0816 05:43:57.491482   10316 start.go:360] acquireMachinesLock for default-k8s-diff-port-122000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:57.501862   10316 start.go:364] duration metric: took 10.288834ms to acquireMachinesLock for "default-k8s-diff-port-122000"
	I0816 05:43:57.501914   10316 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:57.502156   10316 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:43:57.514303   10316 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:43:57.559746   10316 start.go:159] libmachine.API.Create for "default-k8s-diff-port-122000" (driver="qemu2")
	I0816 05:43:57.559803   10316 client.go:168] LocalClient.Create starting
	I0816 05:43:57.559939   10316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:43:57.560022   10316 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:57.560038   10316 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:57.560092   10316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:43:57.560136   10316 main.go:141] libmachine: Decoding PEM data...
	I0816 05:43:57.560151   10316 main.go:141] libmachine: Parsing certificate...
	I0816 05:43:57.560675   10316 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:43:57.723253   10316 main.go:141] libmachine: Creating SSH key...
	I0816 05:43:57.935510   10316 main.go:141] libmachine: Creating Disk image...
	I0816 05:43:57.935524   10316 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:43:57.935685   10316 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2
	I0816 05:43:57.945496   10316 main.go:141] libmachine: STDOUT: 
	I0816 05:43:57.945518   10316 main.go:141] libmachine: STDERR: 
	I0816 05:43:57.945570   10316 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2 +20000M
	I0816 05:43:57.954499   10316 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:43:57.954526   10316 main.go:141] libmachine: STDERR: 
	I0816 05:43:57.954536   10316 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2
	I0816 05:43:57.954540   10316 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:43:57.954549   10316 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:57.954584   10316 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:80:e7:6b:aa:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2
	I0816 05:43:57.956343   10316 main.go:141] libmachine: STDOUT: 
	I0816 05:43:57.956378   10316 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:57.956390   10316 client.go:171] duration metric: took 396.587792ms to LocalClient.Create
	I0816 05:43:59.958554   10316 start.go:128] duration metric: took 2.456410084s to createHost
	I0816 05:43:59.958618   10316 start.go:83] releasing machines lock for "default-k8s-diff-port-122000", held for 2.456773s
	W0816 05:43:59.958955   10316 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:59.970530   10316 out.go:201] 
	W0816 05:43:59.973561   10316 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:59.973591   10316 out.go:270] * 
	* 
	W0816 05:43:59.975582   10316 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:43:59.985516   10316 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-122000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (50.850333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.02s)
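
Before the VM launch fails, the create path in this log does succeed at preparing the disk: libmachine converts the raw boot image to qcow2 and then grows it by 20000 MB, exactly the two qemu-img commands shown above. A minimal sketch of that sequence (local placeholder paths instead of the .minikube machine directory):

// create_disk.go - the two qemu-img steps from the log: raw -> qcow2, then resize.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // hypothetical local paths
	run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
	run("qemu-img", "resize", qcow2, "+20000M")
}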

TestStartStop/group/no-preload/serial/SecondStart (5.92s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-576000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-576000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.8756305s)

-- stdout --
	* [no-preload-576000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-576000" primary control-plane node in "no-preload-576000" cluster
	* Restarting existing qemu2 VM for "no-preload-576000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-576000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:43:51.698557   10336 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:51.698678   10336 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:51.698681   10336 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:51.698683   10336 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:51.698815   10336 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:51.699878   10336 out.go:352] Setting JSON to false
	I0816 05:43:51.715803   10336 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6200,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:43:51.715880   10336 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:51.719783   10336 out.go:177] * [no-preload-576000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:43:51.726688   10336 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:51.726725   10336 notify.go:220] Checking for updates...
	I0816 05:43:51.734772   10336 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:43:51.738633   10336 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:43:51.741659   10336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:51.744686   10336 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:43:51.747639   10336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:51.751007   10336 config.go:182] Loaded profile config "no-preload-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:51.751254   10336 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:51.755644   10336 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:43:51.762692   10336 start.go:297] selected driver: qemu2
	I0816 05:43:51.762699   10336 start.go:901] validating driver "qemu2" against &{Name:no-preload-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:51.762754   10336 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:51.764927   10336 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:43:51.764965   10336 cni.go:84] Creating CNI manager for ""
	I0816 05:43:51.764973   10336 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:43:51.765001   10336 start.go:340] cluster config:
	{Name:no-preload-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:51.768358   10336 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:51.774648   10336 out.go:177] * Starting "no-preload-576000" primary control-plane node in "no-preload-576000" cluster
	I0816 05:43:51.778669   10336 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:51.778756   10336 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/no-preload-576000/config.json ...
	I0816 05:43:51.778774   10336 cache.go:107] acquiring lock: {Name:mk0ee725585939851e658401112124e8d27976db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:51.778776   10336 cache.go:107] acquiring lock: {Name:mkc7d0c3f160b662328e209741ea0ae0b3fa8393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:51.778795   10336 cache.go:107] acquiring lock: {Name:mk157234e051671ad3c50e2fb9901723312d8d1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:51.778853   10336 cache.go:115] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 05:43:51.778861   10336 cache.go:115] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0816 05:43:51.778866   10336 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 106.625µs
	I0816 05:43:51.778869   10336 cache.go:115] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0816 05:43:51.778867   10336 cache.go:107] acquiring lock: {Name:mk9a8e77b0d11462130a0c07c8ac41a530de757e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:51.778897   10336 cache.go:107] acquiring lock: {Name:mk461f9517e3a93523215c91aa78bfdd8c0d2b63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:51.778872   10336 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0816 05:43:51.778860   10336 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.916µs
	I0816 05:43:51.778931   10336 cache.go:115] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0816 05:43:51.778935   10336 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 68.75µs
	I0816 05:43:51.778923   10336 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 05:43:51.778879   10336 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 103.041µs
	I0816 05:43:51.778968   10336 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0816 05:43:51.778879   10336 cache.go:107] acquiring lock: {Name:mkc7c1aeac3cb675af946cebdce88aa9a925a2a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:51.778888   10336 cache.go:107] acquiring lock: {Name:mk6ba78cd053f31de333b084a8280b7fbbd3a623 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:51.778940   10336 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0816 05:43:51.778943   10336 cache.go:115] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0816 05:43:51.778994   10336 cache.go:107] acquiring lock: {Name:mk329c0b5aaaf5895567c32fd5de81d3aee0d999 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:51.778997   10336 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 100.5µs
	I0816 05:43:51.779002   10336 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0816 05:43:51.779029   10336 cache.go:115] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0816 05:43:51.779035   10336 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 156.5µs
	I0816 05:43:51.779039   10336 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0816 05:43:51.779040   10336 cache.go:115] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0816 05:43:51.779044   10336 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 156.875µs
	I0816 05:43:51.779048   10336 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0816 05:43:51.779070   10336 cache.go:115] /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0816 05:43:51.779074   10336 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 103.208µs
	I0816 05:43:51.779081   10336 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0816 05:43:51.779085   10336 cache.go:87] Successfully saved all images to host disk.
	I0816 05:43:51.779201   10336 start.go:360] acquireMachinesLock for no-preload-576000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:52.441154   10336 start.go:364] duration metric: took 661.904541ms to acquireMachinesLock for "no-preload-576000"
	I0816 05:43:52.441330   10336 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:52.441368   10336 fix.go:54] fixHost starting: 
	I0816 05:43:52.442010   10336 fix.go:112] recreateIfNeeded on no-preload-576000: state=Stopped err=<nil>
	W0816 05:43:52.442053   10336 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:52.451644   10336 out.go:177] * Restarting existing qemu2 VM for "no-preload-576000" ...
	I0816 05:43:52.462516   10336 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:52.462719   10336 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:d3:8e:e7:e1:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2
	I0816 05:43:52.473788   10336 main.go:141] libmachine: STDOUT: 
	I0816 05:43:52.473881   10336 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:52.473996   10336 fix.go:56] duration metric: took 32.630458ms for fixHost
	I0816 05:43:52.474018   10336 start.go:83] releasing machines lock for "no-preload-576000", held for 32.83525ms
	W0816 05:43:52.474049   10336 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:52.474203   10336 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:52.474220   10336 start.go:729] Will try again in 5 seconds ...
	I0816 05:43:57.476393   10336 start.go:360] acquireMachinesLock for no-preload-576000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:57.476824   10336 start.go:364] duration metric: took 341.459µs to acquireMachinesLock for "no-preload-576000"
	I0816 05:43:57.476949   10336 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:57.476970   10336 fix.go:54] fixHost starting: 
	I0816 05:43:57.477757   10336 fix.go:112] recreateIfNeeded on no-preload-576000: state=Stopped err=<nil>
	W0816 05:43:57.477788   10336 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:57.483312   10336 out.go:177] * Restarting existing qemu2 VM for "no-preload-576000" ...
	I0816 05:43:57.491263   10336 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:43:57.491570   10336 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:d3:8e:e7:e1:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/no-preload-576000/disk.qcow2
	I0816 05:43:57.501628   10336 main.go:141] libmachine: STDOUT: 
	I0816 05:43:57.501690   10336 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:43:57.501785   10336 fix.go:56] duration metric: took 24.814ms for fixHost
	I0816 05:43:57.501803   10336 start.go:83] releasing machines lock for "no-preload-576000", held for 24.955292ms
	W0816 05:43:57.501967   10336 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-576000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-576000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:43:57.517176   10336 out.go:201] 
	W0816 05:43:57.521274   10336 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:43:57.521302   10336 out.go:270] * 
	* 
	W0816 05:43:57.523210   10336 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:43:57.536248   10336 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-576000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (47.357375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.92s)
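
All of the qemu2 start failures in this run reduce to one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet (the SocketVMnetPath in the cluster config above), so QEMU is never launched. A minimal Go sketch of the same reachability probe, assuming only the socket path taken from the logs; it is a diagnostic illustration, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client hands to QEMU.
		// "connection refused" here is exactly the failure in the logs
		// and means the socket_vmnet daemon is not listening.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails on the CI host, restarting the daemon (for a Homebrew install, "sudo brew services restart socket_vmnet") would likely clear the GUEST_PROVISION failures that follow.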

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-576000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (33.802ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-576000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-576000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-576000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.400625ms)

** stderr ** 
	error: context "no-preload-576000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-576000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (34.391083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.07s)
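
Every kubectl step in this group fails with the same message because the profile's kubeconfig context was never recreated: the VM never came up, so minikube never wrote a "no-preload-576000" context. A short sketch of how one could confirm the missing context with k8s.io/client-go's clientcmd (an assumption for illustration; the tests themselves shell out to kubectl):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the same way kubectl does (KUBECONFIG or
		// the default path) and look up the profile's context by name.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["no-preload-576000"]; !ok {
			fmt.Println("context \"no-preload-576000\" does not exist")
		}
	}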

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-576000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (30.989875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
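
The "(-want +got)" diff above is the output convention of github.com/google/go-cmp: every expected image is prefixed with "-" and nothing appears on the "+" side because "image list" returned nothing from the stopped host. A reduced sketch of that comparison (variable names are illustrative, not the test's own):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		got := []string{} // empty: the VM never started

		// cmp.Diff prints "-" for entries only in want and "+" for
		// entries only in got, matching the report above.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
		}
	}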

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-576000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-576000 --alsologtostderr -v=1: exit status 83 (41.562583ms)

-- stdout --
	* The control-plane node no-preload-576000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-576000"

-- /stdout --
** stderr ** 
	I0816 05:43:57.801728   10356 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:57.801885   10356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:57.801888   10356 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:57.801891   10356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:57.802033   10356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:57.802280   10356 out.go:352] Setting JSON to false
	I0816 05:43:57.802287   10356 mustload.go:65] Loading cluster: no-preload-576000
	I0816 05:43:57.802476   10356 config.go:182] Loaded profile config "no-preload-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:57.806139   10356 out.go:177] * The control-plane node no-preload-576000 host is not running: state=Stopped
	I0816 05:43:57.809197   10356 out.go:177]   To start a cluster, run: "minikube start -p no-preload-576000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-576000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (29.706833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-576000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (30.480833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (11.67s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-301000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-301000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (11.600756791s)

-- stdout --
	* [newest-cni-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-301000" primary control-plane node in "newest-cni-301000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-301000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:43:58.113941   10376 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:58.114074   10376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:58.114080   10376 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:58.114082   10376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:58.114223   10376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:43:58.115308   10376 out.go:352] Setting JSON to false
	I0816 05:43:58.131423   10376 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6207,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:43:58.131488   10376 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:58.136197   10376 out.go:177] * [newest-cni-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:43:58.142153   10376 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:58.142200   10376 notify.go:220] Checking for updates...
	I0816 05:43:58.149133   10376 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:43:58.152226   10376 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:43:58.156211   10376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:58.159248   10376 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:43:58.162190   10376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:58.165465   10376 config.go:182] Loaded profile config "default-k8s-diff-port-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:58.165527   10376 config.go:182] Loaded profile config "multinode-569000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:58.165584   10376 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:58.169174   10376 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 05:43:58.176227   10376 start.go:297] selected driver: qemu2
	I0816 05:43:58.176235   10376 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:43:58.176249   10376 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:58.178420   10376 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0816 05:43:58.178446   10376 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0816 05:43:58.182198   10376 out.go:177] * Automatically selected the socket_vmnet network
	I0816 05:43:58.189793   10376 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 05:43:58.189811   10376 cni.go:84] Creating CNI manager for ""
	I0816 05:43:58.189819   10376 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:43:58.189824   10376 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:43:58.189851   10376 start.go:340] cluster config:
	{Name:newest-cni-301000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:58.193446   10376 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:58.201126   10376 out.go:177] * Starting "newest-cni-301000" primary control-plane node in "newest-cni-301000" cluster
	I0816 05:43:58.205235   10376 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:58.205253   10376 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:43:58.205263   10376 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:58.205344   10376 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:43:58.205350   10376 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:43:58.205455   10376 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/newest-cni-301000/config.json ...
	I0816 05:43:58.205466   10376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/newest-cni-301000/config.json: {Name:mkfb4c12ef33227cecfff98f10aec150e718106d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:58.205710   10376 start.go:360] acquireMachinesLock for newest-cni-301000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:59.958733   10376 start.go:364] duration metric: took 1.753021708s to acquireMachinesLock for "newest-cni-301000"
	I0816 05:43:59.959007   10376 start.go:93] Provisioning new machine with config: &{Name:newest-cni-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:59.959223   10376 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:43:59.967550   10376 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:44:00.015843   10376 start.go:159] libmachine.API.Create for "newest-cni-301000" (driver="qemu2")
	I0816 05:44:00.015880   10376 client.go:168] LocalClient.Create starting
	I0816 05:44:00.015998   10376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:44:00.016055   10376 main.go:141] libmachine: Decoding PEM data...
	I0816 05:44:00.016079   10376 main.go:141] libmachine: Parsing certificate...
	I0816 05:44:00.016143   10376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:44:00.016186   10376 main.go:141] libmachine: Decoding PEM data...
	I0816 05:44:00.016198   10376 main.go:141] libmachine: Parsing certificate...
	I0816 05:44:00.016777   10376 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:44:00.179333   10376 main.go:141] libmachine: Creating SSH key...
	I0816 05:44:00.246569   10376 main.go:141] libmachine: Creating Disk image...
	I0816 05:44:00.246578   10376 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:44:00.246776   10376 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2
	I0816 05:44:00.256711   10376 main.go:141] libmachine: STDOUT: 
	I0816 05:44:00.256738   10376 main.go:141] libmachine: STDERR: 
	I0816 05:44:00.256795   10376 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2 +20000M
	I0816 05:44:00.266694   10376 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:44:00.266723   10376 main.go:141] libmachine: STDERR: 
	I0816 05:44:00.266747   10376 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2
	I0816 05:44:00.266754   10376 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:44:00.266768   10376 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:44:00.266799   10376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:4c:51:91:f7:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2
	I0816 05:44:00.268661   10376 main.go:141] libmachine: STDOUT: 
	I0816 05:44:00.268681   10376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:44:00.268701   10376 client.go:171] duration metric: took 252.80875ms to LocalClient.Create
	I0816 05:44:02.270876   10376 start.go:128] duration metric: took 2.31165425s to createHost
	I0816 05:44:02.270971   10376 start.go:83] releasing machines lock for "newest-cni-301000", held for 2.312155334s
	W0816 05:44:02.271067   10376 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:44:02.280596   10376 out.go:177] * Deleting "newest-cni-301000" in qemu2 ...
	W0816 05:44:02.309681   10376 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:44:02.309707   10376 start.go:729] Will try again in 5 seconds ...
	I0816 05:44:07.311878   10376 start.go:360] acquireMachinesLock for newest-cni-301000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:44:07.312349   10376 start.go:364] duration metric: took 364.25µs to acquireMachinesLock for "newest-cni-301000"
	I0816 05:44:07.312481   10376 start.go:93] Provisioning new machine with config: &{Name:newest-cni-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:44:07.312818   10376 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 05:44:07.322317   10376 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 05:44:07.373231   10376 start.go:159] libmachine.API.Create for "newest-cni-301000" (driver="qemu2")
	I0816 05:44:07.373279   10376 client.go:168] LocalClient.Create starting
	I0816 05:44:07.373402   10376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/ca.pem
	I0816 05:44:07.373469   10376 main.go:141] libmachine: Decoding PEM data...
	I0816 05:44:07.373486   10376 main.go:141] libmachine: Parsing certificate...
	I0816 05:44:07.373552   10376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-6249/.minikube/certs/cert.pem
	I0816 05:44:07.373608   10376 main.go:141] libmachine: Decoding PEM data...
	I0816 05:44:07.373621   10376 main.go:141] libmachine: Parsing certificate...
	I0816 05:44:07.374345   10376 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0816 05:44:07.539105   10376 main.go:141] libmachine: Creating SSH key...
	I0816 05:44:07.605228   10376 main.go:141] libmachine: Creating Disk image...
	I0816 05:44:07.605237   10376 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 05:44:07.605446   10376 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2
	I0816 05:44:07.614934   10376 main.go:141] libmachine: STDOUT: 
	I0816 05:44:07.614955   10376 main.go:141] libmachine: STDERR: 
	I0816 05:44:07.615001   10376 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2 +20000M
	I0816 05:44:07.622884   10376 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 05:44:07.622909   10376 main.go:141] libmachine: STDERR: 
	I0816 05:44:07.622921   10376 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2
	I0816 05:44:07.622926   10376 main.go:141] libmachine: Starting QEMU VM...
	I0816 05:44:07.622935   10376 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:44:07.622996   10376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:35:c0:16:77:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2
	I0816 05:44:07.624665   10376 main.go:141] libmachine: STDOUT: 
	I0816 05:44:07.624685   10376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:44:07.624700   10376 client.go:171] duration metric: took 251.419125ms to LocalClient.Create
	I0816 05:44:09.626959   10376 start.go:128] duration metric: took 2.314114042s to createHost
	I0816 05:44:09.627039   10376 start.go:83] releasing machines lock for "newest-cni-301000", held for 2.314698875s
	W0816 05:44:09.627499   10376 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:44:09.638969   10376 out.go:201] 
	W0816 05:44:09.650146   10376 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:44:09.650173   10376 out.go:270] * 
	* 
	W0816 05:44:09.652869   10376 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:44:09.664922   10376 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-301000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000: exit status 7 (67.722042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.67s)
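
Before the network step fails, the create path does succeed at building the disk: the log shows libmachine shelling out to "qemu-img convert" (raw boot image to qcow2) and then "qemu-img resize ... +20000M". A self-contained Go sketch of those two steps via os/exec, with a placeholder path standing in for the profile's machine directory:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		disk := "/tmp/demo/disk.qcow2" // placeholder; minikube uses the profile's machines dir

		// Step 1: convert the raw seed image into a qcow2 disk.
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
			disk+".raw", disk).CombinedOutput(); err != nil {
			log.Fatalf("qemu-img convert: %v\n%s", err, out)
		}

		// Step 2: grow the image by 20000 MB, as in the log.
		if out, err := exec.Command("qemu-img", "resize", disk, "+20000M").CombinedOutput(); err != nil {
			log.Fatalf("qemu-img resize: %v\n%s", err, out)
		}
	}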

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-122000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-122000 create -f testdata/busybox.yaml: exit status 1 (30.861166ms)

** stderr ** 
	error: context "default-k8s-diff-port-122000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-122000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (34.498875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-122000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (34.487625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-122000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-122000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-122000 describe deploy/metrics-server -n kube-system: exit status 1 (27.729583ms)

** stderr ** 
	error: context "default-k8s-diff-port-122000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-122000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (29.833667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
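
The failure at start_stop_delete_test.go:221 is a substring assertion over the kubectl describe output: because the context is gone, the deployment info is empty and can never contain the overridden registry prefix. A reduced sketch of that style of check (variable names are illustrative, not the test's actual code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// With the context missing, kubectl describe produces no deployment
	// info at all, so the substring below can never be found.
	deployInfo := ""
	want := " fake.domain/registry.k8s.io/echoserver:1.4"

	if !strings.Contains(deployInfo, want) {
		fmt.Printf("addon did not load correct image. Expected to contain %q\n", want)
	}
}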

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-122000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-122000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.294115666s)

-- stdout --
	* [default-k8s-diff-port-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-122000" primary control-plane node in "default-k8s-diff-port-122000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-122000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-122000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:44:03.441306   10420 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:44:03.441430   10420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:44:03.441434   10420 out.go:358] Setting ErrFile to fd 2...
	I0816 05:44:03.441436   10420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:44:03.441570   10420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:44:03.442580   10420 out.go:352] Setting JSON to false
	I0816 05:44:03.458634   10420 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6212,"bootTime":1723806031,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:44:03.458698   10420 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:44:03.463279   10420 out.go:177] * [default-k8s-diff-port-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:44:03.469170   10420 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:44:03.469218   10420 notify.go:220] Checking for updates...
	I0816 05:44:03.477225   10420 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:44:03.481247   10420 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:44:03.484302   10420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:44:03.487334   10420 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:44:03.490289   10420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:44:03.493592   10420 config.go:182] Loaded profile config "default-k8s-diff-port-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:03.493853   10420 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:44:03.498229   10420 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:44:03.505276   10420 start.go:297] selected driver: qemu2
	I0816 05:44:03.505283   10420 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:44:03.505360   10420 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:44:03.507817   10420 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:44:03.507861   10420 cni.go:84] Creating CNI manager for ""
	I0816 05:44:03.507870   10420 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:44:03.507899   10420 start.go:340] cluster config:
	{Name:default-k8s-diff-port-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:44:03.511590   10420 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:44:03.520309   10420 out.go:177] * Starting "default-k8s-diff-port-122000" primary control-plane node in "default-k8s-diff-port-122000" cluster
	I0816 05:44:03.524322   10420 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:44:03.524337   10420 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:44:03.524347   10420 cache.go:56] Caching tarball of preloaded images
	I0816 05:44:03.524406   10420 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:44:03.524411   10420 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:44:03.524472   10420 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/default-k8s-diff-port-122000/config.json ...
	I0816 05:44:03.524928   10420 start.go:360] acquireMachinesLock for default-k8s-diff-port-122000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:44:03.524964   10420 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "default-k8s-diff-port-122000"
	I0816 05:44:03.524977   10420 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:44:03.524984   10420 fix.go:54] fixHost starting: 
	I0816 05:44:03.525105   10420 fix.go:112] recreateIfNeeded on default-k8s-diff-port-122000: state=Stopped err=<nil>
	W0816 05:44:03.525115   10420 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:44:03.529220   10420 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-122000" ...
	I0816 05:44:03.537169   10420 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:44:03.537209   10420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:80:e7:6b:aa:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2
	I0816 05:44:03.539288   10420 main.go:141] libmachine: STDOUT: 
	I0816 05:44:03.539314   10420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:44:03.539347   10420 fix.go:56] duration metric: took 14.364417ms for fixHost
	I0816 05:44:03.539353   10420 start.go:83] releasing machines lock for "default-k8s-diff-port-122000", held for 14.38075ms
	W0816 05:44:03.539359   10420 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:44:03.539399   10420 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:44:03.539404   10420 start.go:729] Will try again in 5 seconds ...
	I0816 05:44:08.541477   10420 start.go:360] acquireMachinesLock for default-k8s-diff-port-122000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:44:09.627231   10420 start.go:364] duration metric: took 1.085662833s to acquireMachinesLock for "default-k8s-diff-port-122000"
	I0816 05:44:09.627407   10420 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:44:09.627427   10420 fix.go:54] fixHost starting: 
	I0816 05:44:09.628140   10420 fix.go:112] recreateIfNeeded on default-k8s-diff-port-122000: state=Stopped err=<nil>
	W0816 05:44:09.628167   10420 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:44:09.646999   10420 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-122000" ...
	I0816 05:44:09.653019   10420 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:44:09.653191   10420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:80:e7:6b:aa:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/default-k8s-diff-port-122000/disk.qcow2
	I0816 05:44:09.661897   10420 main.go:141] libmachine: STDOUT: 
	I0816 05:44:09.661960   10420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:44:09.662037   10420 fix.go:56] duration metric: took 34.613584ms for fixHost
	I0816 05:44:09.662058   10420 start.go:83] releasing machines lock for "default-k8s-diff-port-122000", held for 34.789ms
	W0816 05:44:09.662278   10420 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-122000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-122000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:44:09.677124   10420 out.go:201] 
	W0816 05:44:09.680989   10420 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:44:09.681025   10420 out.go:270] * 
	* 
	W0816 05:44:09.683750   10420 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:44:09.693888   10420 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-122000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (56.710333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.35s)
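
Every start attempt in this run dies at the same precondition: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the helper daemon's unix socket at /var/run/socket_vmnet. A minimal Go probe for that socket, standard library only (the path is taken from the log; this is a diagnostic sketch, not minikube code):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// The same unix socket socket_vmnet_client needs; if the socket_vmnet
	// daemon is not listening, Dial fails with "connection refused"
	// (or "no such file or directory" when the socket file is absent).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Note the probe may itself need elevated privileges, since the socket under /var/run is typically root-owned.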

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-122000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (39.810666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-122000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-122000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-122000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.907666ms)

** stderr ** 
	error: context "default-k8s-diff-port-122000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-122000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (31.087458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-122000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (29.357375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
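
The -want/+got block above is a cmp-style diff in which every expected image sits on the -want side, because "image list" returned nothing from the stopped host. The comparison reduces to a set difference, sketched here with only the standard library (using a subset of the image list from the log):

package main

import "fmt"

func main() {
	// A subset of the v1.31.0 images the test expects.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/pause:3.10",
	}
	// Empty because the VM never started, so "image list" had nothing to report.
	got := []string{}

	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	for _, img := range want {
		if !have[img] {
			fmt.Printf("- \t%q,\n", img) // the "-want" side of the diff
		}
	}
}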

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-122000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-122000 --alsologtostderr -v=1: exit status 83 (41.191167ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-122000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-122000"

-- /stdout --
** stderr ** 
	I0816 05:44:09.959224   10451 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:44:09.959379   10451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:44:09.959386   10451 out.go:358] Setting ErrFile to fd 2...
	I0816 05:44:09.959389   10451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:44:09.959513   10451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:44:09.959717   10451 out.go:352] Setting JSON to false
	I0816 05:44:09.959726   10451 mustload.go:65] Loading cluster: default-k8s-diff-port-122000
	I0816 05:44:09.959899   10451 config.go:182] Loaded profile config "default-k8s-diff-port-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:09.963885   10451 out.go:177] * The control-plane node default-k8s-diff-port-122000 host is not running: state=Stopped
	I0816 05:44:09.968014   10451 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-122000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-122000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (29.8545ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-122000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (29.083708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
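
pause exits 83 here, the code this run associates with a command refused because the profile's host is not running, as distinct from status's 7 and the failed start's 80. The harness distinguishes these in the usual Go way, by inspecting the exit code of the finished process; a reduced sketch (the command is a placeholder, not the real minikube invocation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder for "out/minikube-darwin-arm64 pause -p <profile> ...";
	// any command that exits non-zero exercises the same path.
	cmd := exec.Command("false")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this report: 7 = stopped host (status), 80 = failed start,
		// 83 = command refused because the host is not running.
		fmt.Printf("(dbg) Non-zero exit: exit status %d\n", exitErr.ExitCode())
	}
}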

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-301000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-301000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.188501083s)

-- stdout --
	* [newest-cni-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-301000" primary control-plane node in "newest-cni-301000" cluster
	* Restarting existing qemu2 VM for "newest-cni-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 05:44:11.807040   10478 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:44:11.807176   10478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:44:11.807179   10478 out.go:358] Setting ErrFile to fd 2...
	I0816 05:44:11.807182   10478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:44:11.807311   10478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:44:11.809051   10478 out.go:352] Setting JSON to false
	I0816 05:44:11.825588   10478 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6220,"bootTime":1723806031,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:44:11.825660   10478 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:44:11.830305   10478 out.go:177] * [newest-cni-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:44:11.837276   10478 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:44:11.837326   10478 notify.go:220] Checking for updates...
	I0816 05:44:11.844172   10478 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:44:11.848207   10478 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:44:11.851335   10478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:44:11.854301   10478 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:44:11.857307   10478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:44:11.860633   10478 config.go:182] Loaded profile config "newest-cni-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:11.860904   10478 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:44:11.864277   10478 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:44:11.871275   10478 start.go:297] selected driver: qemu2
	I0816 05:44:11.871289   10478 start.go:901] validating driver "qemu2" against &{Name:newest-cni-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:44:11.871354   10478 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:44:11.873890   10478 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 05:44:11.873937   10478 cni.go:84] Creating CNI manager for ""
	I0816 05:44:11.873945   10478 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:44:11.873974   10478 start.go:340] cluster config:
	{Name:newest-cni-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:44:11.877667   10478 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:44:11.885280   10478 out.go:177] * Starting "newest-cni-301000" primary control-plane node in "newest-cni-301000" cluster
	I0816 05:44:11.889163   10478 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:44:11.889183   10478 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:44:11.889192   10478 cache.go:56] Caching tarball of preloaded images
	I0816 05:44:11.889255   10478 preload.go:172] Found /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 05:44:11.889262   10478 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:44:11.889337   10478 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/newest-cni-301000/config.json ...
	I0816 05:44:11.889816   10478 start.go:360] acquireMachinesLock for newest-cni-301000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:44:11.889854   10478 start.go:364] duration metric: took 31.167µs to acquireMachinesLock for "newest-cni-301000"
	I0816 05:44:11.889864   10478 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:44:11.889871   10478 fix.go:54] fixHost starting: 
	I0816 05:44:11.890004   10478 fix.go:112] recreateIfNeeded on newest-cni-301000: state=Stopped err=<nil>
	W0816 05:44:11.890013   10478 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:44:11.894260   10478 out.go:177] * Restarting existing qemu2 VM for "newest-cni-301000" ...
	I0816 05:44:11.902222   10478 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:44:11.902257   10478 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:35:c0:16:77:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2
	I0816 05:44:11.904475   10478 main.go:141] libmachine: STDOUT: 
	I0816 05:44:11.904495   10478 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:44:11.904527   10478 fix.go:56] duration metric: took 14.655958ms for fixHost
	I0816 05:44:11.904532   10478 start.go:83] releasing machines lock for "newest-cni-301000", held for 14.673834ms
	W0816 05:44:11.904538   10478 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:44:11.904572   10478 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:44:11.904577   10478 start.go:729] Will try again in 5 seconds ...
	I0816 05:44:16.906680   10478 start.go:360] acquireMachinesLock for newest-cni-301000: {Name:mk2040da30c1d031095a714214b64c0e536521c7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:44:16.907062   10478 start.go:364] duration metric: took 294.5µs to acquireMachinesLock for "newest-cni-301000"
	I0816 05:44:16.907200   10478 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:44:16.907219   10478 fix.go:54] fixHost starting: 
	I0816 05:44:16.907909   10478 fix.go:112] recreateIfNeeded on newest-cni-301000: state=Stopped err=<nil>
	W0816 05:44:16.907940   10478 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:44:16.917463   10478 out.go:177] * Restarting existing qemu2 VM for "newest-cni-301000" ...
	I0816 05:44:16.921497   10478 qemu.go:418] Using hvf for hardware acceleration
	I0816 05:44:16.921791   10478 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:35:c0:16:77:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-6249/.minikube/machines/newest-cni-301000/disk.qcow2
	I0816 05:44:16.930648   10478 main.go:141] libmachine: STDOUT: 
	I0816 05:44:16.930713   10478 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 05:44:16.930786   10478 fix.go:56] duration metric: took 23.565042ms for fixHost
	I0816 05:44:16.930804   10478 start.go:83] releasing machines lock for "newest-cni-301000", held for 23.717708ms
	W0816 05:44:16.930952   10478 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 05:44:16.938466   10478 out.go:201] 
	W0816 05:44:16.942507   10478 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 05:44:16.942532   10478 out.go:270] * 
	* 
	W0816 05:44:16.945126   10478 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:44:16.953393   10478 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-301000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000: exit status 7 (68.751417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
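
As with default-k8s-diff-port, the driver retries once before giving up: start.go logs "Will try again in 5 seconds ...", re-acquires the machines lock, and repeats fixHost before surfacing GUEST_PROVISION. The shape of that retry loop, reduced to a sketch (attempt count and delay taken from the log; startHost is a stand-in for the driver start path):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the fixHost/driver-start path, which in this run
// always fails with connection refused from socket_vmnet.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2 // the log shows exactly one retry
	var err error
	for i := 0; i < attempts; i++ {
		if err = startHost(); err == nil {
			return
		}
		if i < attempts-1 {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		}
	}
	fmt.Println("X Exiting due to GUEST_PROVISION:", err)
}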

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-301000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000: exit status 7 (30.798209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-301000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-301000 --alsologtostderr -v=1: exit status 83 (41.801167ms)

-- stdout --
	* The control-plane node newest-cni-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-301000"

-- /stdout --
** stderr ** 
	I0816 05:44:17.137923   10492 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:44:17.138092   10492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:44:17.138096   10492 out.go:358] Setting ErrFile to fd 2...
	I0816 05:44:17.138098   10492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:44:17.138224   10492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:44:17.138438   10492 out.go:352] Setting JSON to false
	I0816 05:44:17.138447   10492 mustload.go:65] Loading cluster: newest-cni-301000
	I0816 05:44:17.138638   10492 config.go:182] Loaded profile config "newest-cni-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:17.142989   10492 out.go:177] * The control-plane node newest-cni-301000 host is not running: state=Stopped
	I0816 05:44:17.146929   10492 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-301000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-301000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000: exit status 7 (29.991417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-301000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000: exit status 7 (29.710542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 7.31
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.08
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 10.62
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.84
55 TestFunctional/serial/CacheCmd/cache/add_local 1.04
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.27
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.09
93 TestFunctional/parallel/License 0.32
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.69
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
126 TestFunctional/parallel/ProfileCmd/profile_list 0.08
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 2.13
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.2
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.1
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.43
258 TestNoKubernetes/serial/Stop 2.9
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
275 TestStartStop/group/old-k8s-version/serial/Stop 3.39
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
288 TestStartStop/group/embed-certs/serial/Stop 3.29
289 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
297 TestStartStop/group/no-preload/serial/Stop 1.82
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.01
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
313 TestStartStop/group/newest-cni/serial/DeployApp 0
314 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
317 TestStartStop/group/newest-cni/serial/Stop 1.83
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-222000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-222000: exit status 85 (100.806292ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |          |
	|         | -p download-only-222000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 05:19:22
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 05:19:22.630328    6748 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:19:22.630470    6748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:22.630473    6748 out.go:358] Setting ErrFile to fd 2...
	I0816 05:19:22.630475    6748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:22.630591    6748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	W0816 05:19:22.630680    6748 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19423-6249/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19423-6249/.minikube/config/config.json: no such file or directory
	I0816 05:19:22.632045    6748 out.go:352] Setting JSON to true
	I0816 05:19:22.648892    6748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4731,"bootTime":1723806031,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:19:22.648956    6748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:19:22.653161    6748 out.go:97] [download-only-222000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:19:22.653273    6748 notify.go:220] Checking for updates...
	W0816 05:19:22.653324    6748 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 05:19:22.657711    6748 out.go:169] MINIKUBE_LOCATION=19423
	I0816 05:19:22.661170    6748 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:19:22.666794    6748 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:19:22.671069    6748 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:19:22.675124    6748 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	W0816 05:19:22.682098    6748 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 05:19:22.682342    6748 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:19:22.685831    6748 out.go:97] Using the qemu2 driver based on user configuration
	I0816 05:19:22.685850    6748 start.go:297] selected driver: qemu2
	I0816 05:19:22.685854    6748 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:19:22.685923    6748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:19:22.689406    6748 out.go:169] Automatically selected the socket_vmnet network
	I0816 05:19:22.696222    6748 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0816 05:19:22.696320    6748 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 05:19:22.696411    6748 cni.go:84] Creating CNI manager for ""
	I0816 05:19:22.696417    6748 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 05:19:22.696478    6748 start.go:340] cluster config:
	{Name:download-only-222000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-222000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:19:22.700322    6748 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:19:22.704921    6748 out.go:97] Downloading VM boot image ...
	I0816 05:19:22.704949    6748 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso
	I0816 05:19:27.503579    6748 out.go:97] Starting "download-only-222000" primary control-plane node in "download-only-222000" cluster
	I0816 05:19:27.503597    6748 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:19:27.566380    6748 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 05:19:27.566404    6748 cache.go:56] Caching tarball of preloaded images
	I0816 05:19:27.566807    6748 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:19:27.571039    6748 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 05:19:27.571047    6748 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 05:19:27.657074    6748 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 05:19:33.274734    6748 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 05:19:33.274894    6748 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 05:19:33.975208    6748 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 05:19:33.975404    6748 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/download-only-222000/config.json ...
	I0816 05:19:33.975423    6748 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-6249/.minikube/profiles/download-only-222000/config.json: {Name:mke6c41a7c797054013650b66154396ce0ff2a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:19:33.976579    6748 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:19:33.977005    6748 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0816 05:19:34.325038    6748 out.go:193] 
	W0816 05:19:34.331084    6748 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19423-6249/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0 0x10780f9c0] Decompressors:map[bz2:0x140000b72b0 gz:0x140000b72b8 tar:0x140000b7260 tar.bz2:0x140000b7270 tar.gz:0x140000b7280 tar.xz:0x140000b7290 tar.zst:0x140000b72a0 tbz2:0x140000b7270 tgz:0x140000b7280 txz:0x140000b7290 tzst:0x140000b72a0 xz:0x140000b72c0 zip:0x140000b72d0 zst:0x140000b72c8] Getters:map[file:0x140002ba700 http:0x14000576550 https:0x140005765a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0816 05:19:34.331113    6748 out_reason.go:110] 
	W0816 05:19:34.338039    6748 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:19:34.339666    6748 out.go:193] 
	
	
	* The control-plane node download-only-222000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-222000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
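The exit status 85 for "logs" is expected on a download-only profile, but the log above also captures why TestDownloadOnly/v1.20.0/kubectl fails in this run: the "?checksum=file:<url>" query is hashicorp/go-getter's checksum-by-URL mode (the "getter: &{...}" dump is go-getter's client state), so the .sha256 file is fetched before the binary, and a 404 on it aborts the whole download up front. A minimal sketch reproducing that lookup, assuming go-getter v1 is called directly (minikube's own wrapper in download.go may differ):

	package main

	import (
		"log"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// Same source URL as the failed download above. go-getter resolves the
		// checksum=file: query by downloading the .sha256 URL first; when that
		// request 404s, it fails with "Error downloading checksum file: bad
		// response code: 404" without fetching the payload at all.
		src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
			"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		if err := getter.GetFile("kubectl.download", src); err != nil {
			log.Fatal(err) // expected: invalid checksum: ... bad response code: 404
		}
	}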

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-222000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (7.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-783000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-783000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (7.311134459s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.31s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-783000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-783000: exit status 85 (74.678875ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | -p download-only-222000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| delete  | -p download-only-222000        | download-only-222000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| start   | -o=json --download-only        | download-only-783000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | -p download-only-783000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 05:19:34
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 05:19:34.768630    6773 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:19:34.768790    6773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:34.768793    6773 out.go:358] Setting ErrFile to fd 2...
	I0816 05:19:34.768801    6773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:34.768943    6773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:19:34.769972    6773 out.go:352] Setting JSON to true
	I0816 05:19:34.787995    6773 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4743,"bootTime":1723806031,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:19:34.788068    6773 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:19:34.793000    6773 out.go:97] [download-only-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:19:34.793088    6773 notify.go:220] Checking for updates...
	I0816 05:19:34.796974    6773 out.go:169] MINIKUBE_LOCATION=19423
	I0816 05:19:34.800035    6773 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:19:34.803014    6773 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:19:34.805963    6773 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:19:34.808967    6773 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	W0816 05:19:34.814948    6773 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 05:19:34.815140    6773 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:19:34.817980    6773 out.go:97] Using the qemu2 driver based on user configuration
	I0816 05:19:34.817990    6773 start.go:297] selected driver: qemu2
	I0816 05:19:34.817994    6773 start.go:901] validating driver "qemu2" against <nil>
	I0816 05:19:34.818048    6773 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:19:34.820988    6773 out.go:169] Automatically selected the socket_vmnet network
	I0816 05:19:34.826216    6773 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0816 05:19:34.826316    6773 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 05:19:34.826339    6773 cni.go:84] Creating CNI manager for ""
	I0816 05:19:34.826350    6773 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:19:34.826362    6773 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:19:34.826408    6773 start.go:340] cluster config:
	{Name:download-only-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:19:34.830421    6773 iso.go:125] acquiring lock: {Name:mkee7fdae783c25a15c40888f5bdc01a171155d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:19:34.834010    6773 out.go:97] Starting "download-only-783000" primary control-plane node in "download-only-783000" cluster
	I0816 05:19:34.834017    6773 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:19:34.897773    6773 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:19:34.897786    6773 cache.go:56] Caching tarball of preloaded images
	I0816 05:19:34.897948    6773 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:19:34.901142    6773 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0816 05:19:34.901150    6773 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 05:19:34.993341    6773 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 05:19:39.414593    6773 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 05:19:39.414966    6773 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19423-6249/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-783000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-783000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)
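The v1.31.0 preload, unlike the v1.20.0 kubectl, downloads and verifies cleanly: its URL carries "?checksum=md5:90c22abece392b762c0b4e45be981bb4", which is what the "saving checksum" / "verifying checksum" lines above are checking. A standalone sketch of that verification step, assuming a plain MD5 recompute over the cached tarball (the $HOME/.minikube path is the usual cache location; this run used a Jenkins-specific MINIKUBE_HOME instead, and whether preload.go verifies exactly this way is an assumption):

	package main

	import (
		"crypto/md5"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		// Expected hash comes from the checksum=md5: query on the preload URL above.
		const want = "90c22abece392b762c0b4e45be981bb4"
		path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4")
		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		if got := fmt.Sprintf("%x", h.Sum(nil)); got != want {
			log.Fatalf("preload checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("preload tarball verified")
	}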

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-783000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestBinaryMirror (0.3s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-393000 --alsologtostderr --binary-mirror http://127.0.0.1:50949 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-393000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-393000
--- PASS: TestBinaryMirror (0.30s)
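TestBinaryMirror starts a local HTTP server on 127.0.0.1:50949 and passes it via --binary-mirror, presumably asserting that the kubectl/kubelet/kubeadm downloads go through the mirror rather than dl.k8s.io. The server side only needs to be a static file tree; a sketch under the assumption that the mirror reproduces the release layout, i.e. <mirror>/<version>/bin/<os>/<arch>/<binary> (the ./mirror directory here is hypothetical):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror at the address the test used. Under the assumed layout,
		// a request for /v1.31.0/bin/linux/arm64/kubectl resolves to a file
		// with that relative path under ./mirror.
		log.Fatal(http.ListenAndServe("127.0.0.1:50949",
			http.FileServer(http.Dir("./mirror"))))
	}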

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-851000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-851000: exit status 85 (57.36975ms)

                                                
                                                
-- stdout --
	* Profile "addons-851000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-851000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-851000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-851000: exit status 85 (60.202375ms)

                                                
                                                
-- stdout --
	* Profile "addons-851000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-851000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (10.08s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.08s)

                                                
                                    
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
TestErrorSpam/status (0.09s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 status: exit status 7 (32.598458ms)

                                                
                                                
-- stdout --
	nospam-943000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 status: exit status 7 (29.877917ms)

                                                
                                                
-- stdout --
	nospam-943000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 status: exit status 7 (30.551667ms)

                                                
                                                
-- stdout --
	nospam-943000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
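Exit status 7 from "minikube status" recurs throughout this report and is what the post-mortem helpers label "may be ok": the profile exists but the host is stopped, as the stdout above shows, while 83 accompanies the wrong-state guidance messages and 85 the commands aimed at a profile whose host does not exist. A sketch of branching on it from a caller with os/exec (the binary path is this run's; the reading of the codes is inferred from the outputs in this report, not taken from minikube documentation):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status", "-p", "nospam-943000")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("host running:\n%s", out)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			// Matches the "status error: exit status 7 (may be ok)" helper output.
			fmt.Printf("host not running:\n%s", out)
		default:
			fmt.Printf("unexpected failure: %v\n%s", err, out)
		}
	}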

                                                
                                    
TestErrorSpam/pause (0.12s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 pause: exit status 83 (40.160375ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-943000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-943000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 pause: exit status 83 (40.732417ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-943000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-943000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 pause: exit status 83 (39.958042ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-943000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-943000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

                                                
                                    
TestErrorSpam/unpause (0.12s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 unpause: exit status 83 (38.782167ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-943000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-943000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 unpause: exit status 83 (40.906291ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-943000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-943000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 unpause: exit status 83 (40.729292ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-943000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-943000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

                                                
                                    
TestErrorSpam/stop (10.62s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 stop: (4.000187625s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 stop: (3.409266167s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-943000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-943000 stop: (3.206026209s)
--- PASS: TestErrorSpam/stop (10.62s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19423-6249/.minikube/files/etc/test/nested/copy/6746/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.84s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3776493986/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 cache add minikube-local-cache-test:functional-894000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 cache delete minikube-local-cache-test:functional-894000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-894000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 config get cpus: exit status 14 (32.126167ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 config get cpus: exit status 14 (29.775792ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-894000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-894000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (163.825333ms)

                                                
                                                
-- stdout --
	* [functional-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:21:20.699663    7334 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:21:20.699839    7334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:21:20.699843    7334 out.go:358] Setting ErrFile to fd 2...
	I0816 05:21:20.699847    7334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:21:20.700008    7334 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:21:20.701316    7334 out.go:352] Setting JSON to false
	I0816 05:21:20.720943    7334 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4849,"bootTime":1723806031,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:21:20.721034    7334 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:21:20.726363    7334 out.go:177] * [functional-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 05:21:20.733254    7334 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:21:20.733312    7334 notify.go:220] Checking for updates...
	I0816 05:21:20.741318    7334 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:21:20.744370    7334 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:21:20.747310    7334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:21:20.750412    7334 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:21:20.753375    7334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:21:20.756624    7334 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:21:20.756933    7334 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:21:20.761329    7334 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 05:21:20.768336    7334 start.go:297] selected driver: qemu2
	I0816 05:21:20.768345    7334 start.go:901] validating driver "qemu2" against &{Name:functional-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:21:20.768403    7334 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:21:20.775376    7334 out.go:201] 
	W0816 05:21:20.779378    7334 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 05:21:20.783351    7334 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-894000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
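
Note: the dry-run pair above is the asserted behavior, not a regression: the 250MB request must fail validation (exit 23), and the follow-up dry-run without --memory must pass. A minimal sketch of a dry-run that clears the validator, assuming the 1800MB floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message:

	# request at least the usable minimum reported by the error text
	out/minikube-darwin-arm64 start -p functional-894000 --dry-run --memory 1800MB --alsologtostderr -v=1 --driver=qemu2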

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-894000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-894000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.998458ms)

-- stdout --
	* [functional-894000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0816 05:21:20.928212    7345 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:21:20.928327    7345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:21:20.928329    7345 out.go:358] Setting ErrFile to fd 2...
	I0816 05:21:20.928332    7345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:21:20.928453    7345 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-6249/.minikube/bin
	I0816 05:21:20.929866    7345 out.go:352] Setting JSON to false
	I0816 05:21:20.946704    7345 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4849,"bootTime":1723806031,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0816 05:21:20.946788    7345 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:21:20.950525    7345 out.go:177] * [functional-894000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0816 05:21:20.957341    7345 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:21:20.957363    7345 notify.go:220] Checking for updates...
	I0816 05:21:20.965310    7345 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	I0816 05:21:20.969306    7345 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 05:21:20.972416    7345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:21:20.975375    7345 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	I0816 05:21:20.978384    7345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:21:20.981768    7345 config.go:182] Loaded profile config "functional-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:21:20.982033    7345 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:21:20.986348    7345 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0816 05:21:20.993390    7345 start.go:297] selected driver: qemu2
	I0816 05:21:20.993400    7345 start.go:901] validating driver "qemu2" against &{Name:functional-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:21:20.993473    7345 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:21:21.000401    7345 out.go:201] 
	W0816 05:21:21.004374    7345 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 05:21:21.005742    7345 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
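
Note: this test re-runs the same under-provisioned dry-run and asserts that the output is localized (French, above). A hedged sketch for reproducing the localized run by hand, assuming minikube selects its message catalog from the LC_ALL/LANG environment variables:

	# force the French catalog for one invocation (LC_ALL handling is an assumption)
	LC_ALL=fr out/minikube-darwin-arm64 start -p functional-894000 --dry-run --memory 250MB --driver=qemu2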

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.660885959s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-894000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-894000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image rm kicbase/echo-server:functional-894000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-894000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 image save --daemon kicbase/echo-server:functional-894000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-894000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "48.853125ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.928667ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "46.926625ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.696167ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.011525875s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
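
Note: dscacheutil resolves through the macOS system resolver path rather than querying a DNS server directly, so this check can succeed even when direct queries do not. A hedged sketch of both lookups (the 10.96.0.10 cluster-DNS address is an assumption derived from the ServiceCIDR in the profile dump above):

	# resolver-path lookup, exactly as the test performs it
	dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
	# direct query against the assumed cluster DNS service address
	dig +short nginx-svc.default.svc.cluster.local. @10.96.0.10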

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-894000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-894000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-894000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-894000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.13s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-435000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-435000 --output=json --user=testUser: (2.125156417s)
--- PASS: TestJSONOutput/stop/Command (2.13s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-965000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-965000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.821042ms)

-- stdout --
	{"specversion":"1.0","id":"95f0fad9-fb3a-4039-9caa-cfdd5051ba02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-965000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a37f914-c8c0-41ee-ae36-1799b5248327","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"54e4ec60-e4fc-4f82-b7b1-fb317264a99e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig"}}
	{"specversion":"1.0","id":"f251c01b-99a1-4c97-a7cb-9c6126f5ef73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"cc52af1a-080e-4213-9b45-68486bce0c05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dbac7ca7-8c2f-4776-b096-1e6e9daea60d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube"}}
	{"specversion":"1.0","id":"2319932b-8629-43c4-b881-53f8d3223f9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a0bef56-d764-4d3c-a0cd-9def97bc7b07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-965000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-965000
--- PASS: TestErrorJSONOutput (0.20s)
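
Note: every line emitted under --output=json is a CloudEvents envelope, so the stream is line-delimited JSON. A hedged sketch for filtering error events out of it with jq (jq is an assumption here; it is not part of the test harness):

	out/minikube-darwin-arm64 start -p json-output-error-965000 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'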

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.10s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-763000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-763000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.87525ms)

-- stdout --
	* [NoKubernetes-763000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-6249/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-6249/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
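
Note: the MK_USAGE failure above is the asserted behavior: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the accepted combinations, following the hint printed in the error text (version strings are illustrative):

	# run the node with no Kubernetes components at all
	out/minikube-darwin-arm64 start -p NoKubernetes-763000 --no-kubernetes --driver=qemu2
	# or pin a version, with Kubernetes left enabled
	out/minikube-darwin-arm64 start -p NoKubernetes-763000 --kubernetes-version=v1.20.0 --driver=qemu2
	# clear a globally configured version, as the error suggests
	out/minikube-darwin-arm64 config unset kubernetes-version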

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-763000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-763000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.103125ms)

-- stdout --
	* The control-plane node NoKubernetes-763000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-763000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.43s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.621061125s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.805117083s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.43s)

TestNoKubernetes/serial/Stop (2.9s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-763000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-763000: (2.898783834s)
--- PASS: TestNoKubernetes/serial/Stop (2.90s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-763000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-763000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.188458ms)

-- stdout --
	* The control-plane node NoKubernetes-763000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-763000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-972000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

TestStartStop/group/old-k8s-version/serial/Stop (3.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-861000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-861000 --alsologtostderr -v=3: (3.389601s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-861000 -n old-k8s-version-861000: exit status 7 (46.175ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-861000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (3.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-023000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-023000 --alsologtostderr -v=3: (3.288648125s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-023000 -n embed-certs-023000: exit status 7 (55.466ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-023000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (1.82s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-576000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-576000 --alsologtostderr -v=3: (1.816537583s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.82s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-576000 -n no-preload-576000: exit status 7 (57.805417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-576000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-122000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-122000 --alsologtostderr -v=3: (3.014542833s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-122000 -n default-k8s-diff-port-122000: exit status 7 (60.99625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-122000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-301000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (1.83s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-301000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-301000 --alsologtostderr -v=3: (1.832357333s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.83s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-301000 -n newest-cni-301000: exit status 7 (62.557083ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-301000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (9.73s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1385265633/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723810845318560000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1385265633/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723810845318560000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1385265633/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723810845318560000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1385265633/001/test-1723810845318560000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (49.102083ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.004375ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.974875ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.94375ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.516083ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.566417ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.255333ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo umount -f /mount-9p": exit status 83 (46.886958ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-894000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1385265633/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (9.73s)
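
Note: the skip message above explains the failure mode: the mount helper must listen on a non-localhost port, and macOS gates that behind an interactive prompt for unsigned binaries. A minimal sketch of the manual check the test automates, reusing the commands from the log (the /tmp/mnt host path is a placeholder):

	out/minikube-darwin-arm64 mount -p functional-894000 /tmp/mnt:/mount-9p --alsologtostderr -v=1 &
	out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"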

TestFunctional/parallel/MountCmd/specific-port (11.28s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2542510896/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (64.419709ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.595583ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.055042ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.486208ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (79.493125ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.281959ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.883584ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "sudo umount -f /mount-9p": exit status 83 (46.755625ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-894000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2542510896/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.28s)
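
Note: the same probe can be reproduced by hand once a cluster is actually running; the sketch below mirrors the commands the test drives, with an illustrative /tmp/mnt host path:

    out/minikube-darwin-arm64 mount -p functional-894000 /tmp/mnt:/mount-9p --port 46464 &
    out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T /mount-9p | grep 9p"

findmnt -T reports the filesystem that contains the target path, so a line mentioning 9p confirms the mount surfaced inside the guest.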

TestFunctional/parallel/MountCmd/VerifyCleanup (14.3s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684023887/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684023887/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684023887/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1: exit status 83 (76.264333ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1: exit status 83 (83.371875ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1: exit status 83 (88.1655ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1: exit status 83 (86.053959ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1: exit status 83 (86.128791ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1: exit status 83 (87.178708ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T" /mount1: exit status 83 (87.098542ms)

-- stdout --
	* The control-plane node functional-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-894000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684023887/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684023887/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-894000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684023887/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (14.30s)
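
Note: VerifyCleanup polls all three mountpoints; an equivalent manual probe over the paths used by the mount daemons above would be:

    for m in /mount1 /mount2 /mount3; do
      out/minikube-darwin-arm64 -p functional-894000 ssh "findmnt -T $m"
    done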

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
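
Note: the skip is gated by the harness's --gvisor flag (false by default, per the message above). Assuming the usual Go integration-test layout, it could be enabled by forwarding the flag to the compiled test binary, e.g.:

    go test ./test/integration -run TestGvisorAddon -args --gvisor=true

Everything after -args is handed to the test binary rather than to go test itself.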

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.32s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-998000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-998000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-998000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-998000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-998000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-998000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-998000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-998000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-998000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-998000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-998000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: /etc/hosts:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: /etc/resolv.conf:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-998000

>>> host: crictl pods:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: crictl containers:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> k8s: describe netcat deployment:
error: context "cilium-998000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-998000" does not exist

>>> k8s: netcat logs:
error: context "cilium-998000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-998000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-998000" does not exist

>>> k8s: coredns logs:
error: context "cilium-998000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-998000" does not exist

>>> k8s: api server logs:
error: context "cilium-998000" does not exist

>>> host: /etc/cni:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: ip a s:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: ip r s:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: iptables-save:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: iptables table nat:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-998000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-998000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-998000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-998000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-998000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-998000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-998000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-998000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-998000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-998000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-998000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: kubelet daemon config:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> k8s: kubelet logs:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-998000

>>> host: docker daemon status:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: docker daemon config:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: docker system info:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: cri-docker daemon status:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: cri-docker daemon config:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: cri-dockerd version:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: containerd daemon status:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: containerd daemon config:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: containerd config dump:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: crio daemon status:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: crio daemon config:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: /etc/crio:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

>>> host: crio config:
* Profile "cilium-998000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998000"

----------------------- debugLogs end: cilium-998000 [took: 2.218521s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-998000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-998000
--- SKIP: TestNetworkPlugins/group/cilium (2.32s)
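
Note: every probe in the debugLogs dump fails identically because the cilium-998000 profile and its kubeconfig context were never created; the two commands the errors themselves point to are enough to confirm that from a shell:

    out/minikube-darwin-arm64 profile list
    kubectl config get-contexts cilium-998000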

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-582000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-582000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
