Test Report: QEMU_macOS 19468

91a16964608358fea9174134e48bcab54b5c9be6 : 2024-08-19 : 35860

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.24
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.01
27 TestAddons/Setup 10.18
28 TestCertOptions 10.05
29 TestCertExpiration 195.27
30 TestDockerFlags 10.13
31 TestForceSystemdFlag 10.34
32 TestForceSystemdEnv 10.03
38 TestErrorSpam/setup 9.77
47 TestFunctional/serial/StartWithProxy 9.95
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.77
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.04
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.17
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.28
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.28
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 91.07
100 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
101 TestFunctional/parallel/ServiceCmd/List 0.04
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
104 TestFunctional/parallel/ServiceCmd/Format 0.04
105 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/Version/components 0.05
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
118 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
127 TestFunctional/parallel/DockerEnv/bash 0.04
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.82
141 TestMultiControlPlane/serial/StartCluster 9.92
142 TestMultiControlPlane/serial/DeployApp 82.52
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 53.4
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.29
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 3.73
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 9.95
165 TestJSONOutput/start/Command 9.79
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.11
197 TestMountStart/serial/StartWithMountFirst 10.08
200 TestMultiNode/serial/FreshStart2Nodes 9.95
201 TestMultiNode/serial/DeployApp2Nodes 68.9
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 37.63
209 TestMultiNode/serial/RestartKeepsNodes 8.99
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.35
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 20.41
217 TestPreload 10.09
219 TestScheduledStopUnix 10.02
220 TestSkaffold 12.24
223 TestRunningBinaryUpgrade 598.05
225 TestKubernetesUpgrade 18.55
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.09
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.09
241 TestStoppedBinaryUpgrade/Upgrade 576.01
243 TestPause/serial/Start 10.12
253 TestNoKubernetes/serial/StartWithK8s 9.84
254 TestNoKubernetes/serial/StartWithStopK8s 5.33
255 TestNoKubernetes/serial/Start 5.3
259 TestNoKubernetes/serial/StartNoArgs 5.34
261 TestNetworkPlugins/group/custom-flannel/Start 9.98
262 TestNetworkPlugins/group/auto/Start 9.78
263 TestNetworkPlugins/group/false/Start 9.8
264 TestNetworkPlugins/group/kindnet/Start 9.76
265 TestNetworkPlugins/group/flannel/Start 9.94
266 TestNetworkPlugins/group/enable-default-cni/Start 9.85
267 TestNetworkPlugins/group/bridge/Start 9.86
268 TestNetworkPlugins/group/kubenet/Start 9.85
269 TestNetworkPlugins/group/calico/Start 9.84
272 TestStartStop/group/old-k8s-version/serial/FirstStart 10.23
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.95
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
288 TestStartStop/group/embed-certs/serial/FirstStart 9.99
290 TestStartStop/group/no-preload/serial/SecondStart 6.63
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
294 TestStartStop/group/no-preload/serial/Pause 0.1
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.53
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.25
301 TestStartStop/group/embed-certs/serial/SecondStart 5.95
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
303 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
304 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
305 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.09
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
307 TestStartStop/group/embed-certs/serial/Pause 0.11
310 TestStartStop/group/newest-cni/serial/FirstStart 9.89
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.57
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.26
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (12.24s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-203000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-203000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (12.234733708s)

-- stdout --
	{"specversion":"1.0","id":"4ae25a8a-a751-457f-a380-2035ee6220bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-203000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a502113-f2e8-4f6d-951e-021d1f643b99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19468"}}
	{"specversion":"1.0","id":"274c60b7-ceb7-488b-9af5-76424521f44a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig"}}
	{"specversion":"1.0","id":"c29de3d0-5232-40ee-9867-b0f5f7664a86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a7b0c6e4-35c1-4e90-94a4-451a75e169da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f87152f2-5ed6-471e-84e5-f4d4773ebbfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube"}}
	{"specversion":"1.0","id":"b96cc406-c10d-4c2d-aace-28d4301c5e6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"68da5ab5-e992-4b56-8bb9-86f8c1a1400a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"73af97e2-ea54-4c36-8d9f-189f8c08a912","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"e5ff8623-0374-43ef-937c-4e1e8cd8c5b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b9e5c79-d850-4bd7-879f-6e479680fae3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-203000\" primary control-plane node in \"download-only-203000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e43c0bae-8706-4b48-a420-524b4d2eff19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"88e3838b-deb0-4611-b933-64ecb5fc9e0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106b83940 0x106b83940 0x106b83940 0x106b83940 0x106b83940 0x106b83940 0x106b83940] Decompressors:map[bz2:0x1400000e9c8 gz:0x1400000ea50 tar:0x1400000ea00 tar.bz2:0x1400000ea10 tar.gz:0x1400000ea20 tar.xz:0x1400000ea30 tar.zst:0x1400000ea40 tbz2:0x1400000ea10 tgz:0x1
400000ea20 txz:0x1400000ea30 tzst:0x1400000ea40 xz:0x1400000ea58 zip:0x1400000ea60 zst:0x1400000ea70] Getters:map[file:0x140009fe550 http:0x14000756190 https:0x140007561e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"2d11373a-3bdb-47aa-a042-71702820d655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0819 11:05:30.823880   12321 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:05:30.824014   12321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:05:30.824020   12321 out.go:358] Setting ErrFile to fd 2...
	I0819 11:05:30.824022   12321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:05:30.824144   12321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	W0819 11:05:30.824228   12321 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19468-11838/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19468-11838/.minikube/config/config.json: no such file or directory
	I0819 11:05:30.825644   12321 out.go:352] Setting JSON to true
	I0819 11:05:30.843572   12321 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5697,"bootTime":1724085033,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:05:30.843645   12321 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:05:30.849562   12321 out.go:97] [download-only-203000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:05:30.849709   12321 notify.go:220] Checking for updates...
	W0819 11:05:30.849732   12321 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 11:05:30.853620   12321 out.go:169] MINIKUBE_LOCATION=19468
	I0819 11:05:30.856591   12321 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:05:30.861545   12321 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:05:30.864583   12321 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:05:30.867587   12321 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	W0819 11:05:30.873613   12321 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:05:30.873848   12321 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:05:30.877477   12321 out.go:97] Using the qemu2 driver based on user configuration
	I0819 11:05:30.877495   12321 start.go:297] selected driver: qemu2
	I0819 11:05:30.877509   12321 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:05:30.877577   12321 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:05:30.880492   12321 out.go:169] Automatically selected the socket_vmnet network
	I0819 11:05:30.885906   12321 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 11:05:30.885996   12321 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:05:30.886057   12321 cni.go:84] Creating CNI manager for ""
	I0819 11:05:30.886077   12321 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 11:05:30.886126   12321 start.go:340] cluster config:
	{Name:download-only-203000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-203000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:05:30.890106   12321 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:05:30.894544   12321 out.go:97] Downloading VM boot image ...
	I0819 11:05:30.894570   12321 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0819 11:05:36.109649   12321 out.go:97] Starting "download-only-203000" primary control-plane node in "download-only-203000" cluster
	I0819 11:05:36.109676   12321 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:05:36.174326   12321 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:05:36.174349   12321 cache.go:56] Caching tarball of preloaded images
	I0819 11:05:36.175203   12321 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:05:36.179590   12321 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 11:05:36.179597   12321 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:05:36.278487   12321 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:05:41.823491   12321 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:05:41.823656   12321 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:05:42.518684   12321 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 11:05:42.518880   12321 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/download-only-203000/config.json ...
	I0819 11:05:42.518897   12321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/download-only-203000/config.json: {Name:mk1a60e012ab2e3f16a9ea9e6707987cce6ee765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:05:42.519134   12321 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:05:42.519310   12321 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0819 11:05:42.980121   12321 out.go:193] 
	W0819 11:05:42.986200   12321 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106b83940 0x106b83940 0x106b83940 0x106b83940 0x106b83940 0x106b83940 0x106b83940] Decompressors:map[bz2:0x1400000e9c8 gz:0x1400000ea50 tar:0x1400000ea00 tar.bz2:0x1400000ea10 tar.gz:0x1400000ea20 tar.xz:0x1400000ea30 tar.zst:0x1400000ea40 tbz2:0x1400000ea10 tgz:0x1400000ea20 txz:0x1400000ea30 tzst:0x1400000ea40 xz:0x1400000ea58 zip:0x1400000ea60 zst:0x1400000ea70] Getters:map[file:0x140009fe550 http:0x14000756190 https:0x140007561e0] Dir:false ProgressLis
tener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0819 11:05:42.986228   12321 out_reason.go:110] 
	W0819 11:05:42.994085   12321 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:05:42.998062   12321 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-203000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (12.24s)
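For context: the failure is a 404 on the kubectl checksum URL for darwin/arm64, which suggests no v1.20.0 kubectl binary was ever published for that platform. A minimal, self-contained Go sketch (standard library only, not part of the test suite) that probes the same URL outside the harness:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The checksum file minikube tried to fetch before the kubectl binary itself.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"

	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	defer resp.Body.Close()

	// A "404 Not Found" here reproduces the "bad response code: 404"
	// from the failure above without running minikube at all.
	fmt.Println(url, "->", resp.Status)
}
```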

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
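This follow-on failure is just the missing artifact from the previous test: per the failure message, the suite stats the cached kubectl path and fails when it is absent. A minimal standalone equivalent of that check (path copied verbatim from the failure message):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	path := "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/darwin/arm64/v1.20.0/kubectl"

	// Same check the test performs: the binary must exist in the download cache.
	if _, err := os.Stat(path); err != nil {
		fmt.Println("kubectl not cached:", err)
		return
	}
	fmt.Println("kubectl cached at", path)
}
```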

TestOffline (10.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-782000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-782000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.897130292s)

-- stdout --
	* [offline-docker-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-782000" primary control-plane node in "offline-docker-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:15:59.070957   14157 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:15:59.071123   14157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:15:59.071127   14157 out.go:358] Setting ErrFile to fd 2...
	I0819 11:15:59.071130   14157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:15:59.071284   14157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:15:59.072643   14157 out.go:352] Setting JSON to false
	I0819 11:15:59.090329   14157 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6326,"bootTime":1724085033,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:15:59.090439   14157 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:15:59.095669   14157 out.go:177] * [offline-docker-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:15:59.103734   14157 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:15:59.103751   14157 notify.go:220] Checking for updates...
	I0819 11:15:59.109679   14157 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:15:59.112703   14157 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:15:59.115699   14157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:15:59.118665   14157 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:15:59.121683   14157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:15:59.124976   14157 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:15:59.125046   14157 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:15:59.128663   14157 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:15:59.134580   14157 start.go:297] selected driver: qemu2
	I0819 11:15:59.134590   14157 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:15:59.134598   14157 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:15:59.136621   14157 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:15:59.139635   14157 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:15:59.142745   14157 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:15:59.142763   14157 cni.go:84] Creating CNI manager for ""
	I0819 11:15:59.142769   14157 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:15:59.142772   14157 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:15:59.142807   14157 start.go:340] cluster config:
	{Name:offline-docker-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:15:59.146547   14157 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:59.153634   14157 out.go:177] * Starting "offline-docker-782000" primary control-plane node in "offline-docker-782000" cluster
	I0819 11:15:59.157704   14157 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:15:59.157744   14157 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:15:59.157753   14157 cache.go:56] Caching tarball of preloaded images
	I0819 11:15:59.157829   14157 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:15:59.157835   14157 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:15:59.157902   14157 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/offline-docker-782000/config.json ...
	I0819 11:15:59.157913   14157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/offline-docker-782000/config.json: {Name:mkdc2976ddbaadd1ac844dac921e178edbefcddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:15:59.158216   14157 start.go:360] acquireMachinesLock for offline-docker-782000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:15:59.158259   14157 start.go:364] duration metric: took 32.458µs to acquireMachinesLock for "offline-docker-782000"
	I0819 11:15:59.158273   14157 start.go:93] Provisioning new machine with config: &{Name:offline-docker-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:15:59.158311   14157 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:15:59.165702   14157 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:15:59.181660   14157 start.go:159] libmachine.API.Create for "offline-docker-782000" (driver="qemu2")
	I0819 11:15:59.181690   14157 client.go:168] LocalClient.Create starting
	I0819 11:15:59.181771   14157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:15:59.181803   14157 main.go:141] libmachine: Decoding PEM data...
	I0819 11:15:59.181818   14157 main.go:141] libmachine: Parsing certificate...
	I0819 11:15:59.181865   14157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:15:59.181888   14157 main.go:141] libmachine: Decoding PEM data...
	I0819 11:15:59.181895   14157 main.go:141] libmachine: Parsing certificate...
	I0819 11:15:59.182263   14157 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:15:59.334940   14157 main.go:141] libmachine: Creating SSH key...
	I0819 11:15:59.439086   14157 main.go:141] libmachine: Creating Disk image...
	I0819 11:15:59.439096   14157 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:15:59.439328   14157 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2
	I0819 11:15:59.458929   14157 main.go:141] libmachine: STDOUT: 
	I0819 11:15:59.458954   14157 main.go:141] libmachine: STDERR: 
	I0819 11:15:59.459016   14157 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2 +20000M
	I0819 11:15:59.467619   14157 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:15:59.467638   14157 main.go:141] libmachine: STDERR: 
	I0819 11:15:59.467667   14157 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2
	I0819 11:15:59.467672   14157 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:15:59.467685   14157 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:15:59.467712   14157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:87:84:3e:33:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2
	I0819 11:15:59.469508   14157 main.go:141] libmachine: STDOUT: 
	I0819 11:15:59.469526   14157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:15:59.469545   14157 client.go:171] duration metric: took 287.85225ms to LocalClient.Create
	I0819 11:16:01.471673   14157 start.go:128] duration metric: took 2.31336s to createHost
	I0819 11:16:01.471708   14157 start.go:83] releasing machines lock for "offline-docker-782000", held for 2.313456333s
	W0819 11:16:01.471726   14157 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:01.481474   14157 out.go:177] * Deleting "offline-docker-782000" in qemu2 ...
	W0819 11:16:01.493377   14157 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:01.493388   14157 start.go:729] Will try again in 5 seconds ...
	I0819 11:16:06.495601   14157 start.go:360] acquireMachinesLock for offline-docker-782000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:16:06.496077   14157 start.go:364] duration metric: took 383.209µs to acquireMachinesLock for "offline-docker-782000"
	I0819 11:16:06.496197   14157 start.go:93] Provisioning new machine with config: &{Name:offline-docker-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:16:06.496412   14157 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:16:06.505952   14157 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:16:06.556326   14157 start.go:159] libmachine.API.Create for "offline-docker-782000" (driver="qemu2")
	I0819 11:16:06.556374   14157 client.go:168] LocalClient.Create starting
	I0819 11:16:06.556493   14157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:16:06.556556   14157 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:06.556570   14157 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:06.556667   14157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:16:06.556713   14157 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:06.556724   14157 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:06.557364   14157 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:16:06.721026   14157 main.go:141] libmachine: Creating SSH key...
	I0819 11:16:06.886133   14157 main.go:141] libmachine: Creating Disk image...
	I0819 11:16:06.886141   14157 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:16:06.886419   14157 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2
	I0819 11:16:06.895709   14157 main.go:141] libmachine: STDOUT: 
	I0819 11:16:06.895729   14157 main.go:141] libmachine: STDERR: 
	I0819 11:16:06.895772   14157 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2 +20000M
	I0819 11:16:06.903751   14157 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:16:06.903766   14157 main.go:141] libmachine: STDERR: 
	I0819 11:16:06.903776   14157 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2
	I0819 11:16:06.903781   14157 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:16:06.903790   14157 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:16:06.903829   14157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:e6:a3:f7:3b:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/offline-docker-782000/disk.qcow2
	I0819 11:16:06.905480   14157 main.go:141] libmachine: STDOUT: 
	I0819 11:16:06.905494   14157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:16:06.905507   14157 client.go:171] duration metric: took 349.130208ms to LocalClient.Create
	I0819 11:16:08.907621   14157 start.go:128] duration metric: took 2.411208542s to createHost
	I0819 11:16:08.907635   14157 start.go:83] releasing machines lock for "offline-docker-782000", held for 2.4115545s
	W0819 11:16:08.907725   14157 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:08.911870   14157 out.go:201] 
	W0819 11:16:08.915879   14157 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:16:08.915884   14157 out.go:270] * 
	* 
	W0819 11:16:08.916359   14157 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:16:08.928924   14157 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-782000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-19 11:16:08.93841 -0700 PDT m=+638.196044793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-782000 -n offline-docker-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-782000 -n offline-docker-782000: exit status 7 (34.970625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-782000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-782000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-782000
--- FAIL: TestOffline (10.01s)
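Both VM create attempts died at the same step: connecting to the socket_vmnet control socket, and "Connection refused" on a unix socket means nothing is listening there. A minimal Go sketch (standard library only; socket path taken from the logs above) to check the daemon on the build agent before blaming minikube:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The control socket minikube's qemu2 driver hands to socket_vmnet_client.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the failure above and points at
		// the socket_vmnet daemon not running (or the socket file being stale).
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

The same connection-refused signature recurs in TestAddons/Setup below (and, judging by the ~10 s durations in the summary table, in most of this run's other failures), so a single daemon-side fix would likely clear the bulk of them.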

TestAddons/Setup (10.18s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-110000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-110000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.173576541s)

-- stdout --
	* [addons-110000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-110000" primary control-plane node in "addons-110000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-110000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:05:51.657848   12412 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:05:51.657975   12412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:05:51.657978   12412 out.go:358] Setting ErrFile to fd 2...
	I0819 11:05:51.657981   12412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:05:51.658092   12412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:05:51.659098   12412 out.go:352] Setting JSON to false
	I0819 11:05:51.675155   12412 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5718,"bootTime":1724085033,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:05:51.675224   12412 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:05:51.679432   12412 out.go:177] * [addons-110000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:05:51.685389   12412 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:05:51.685447   12412 notify.go:220] Checking for updates...
	I0819 11:05:51.689773   12412 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:05:51.693315   12412 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:05:51.696341   12412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:05:51.699324   12412 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:05:51.702400   12412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:05:51.705493   12412 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:05:51.709297   12412 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:05:51.716269   12412 start.go:297] selected driver: qemu2
	I0819 11:05:51.716275   12412 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:05:51.716280   12412 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:05:51.718448   12412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:05:51.721333   12412 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:05:51.724326   12412 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:05:51.724371   12412 cni.go:84] Creating CNI manager for ""
	I0819 11:05:51.724380   12412 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:05:51.724384   12412 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:05:51.724412   12412 start.go:340] cluster config:
	{Name:addons-110000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-110000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:05:51.728062   12412 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:05:51.736166   12412 out.go:177] * Starting "addons-110000" primary control-plane node in "addons-110000" cluster
	I0819 11:05:51.740294   12412 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:05:51.740317   12412 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:05:51.740326   12412 cache.go:56] Caching tarball of preloaded images
	I0819 11:05:51.740383   12412 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:05:51.740389   12412 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:05:51.740598   12412 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/addons-110000/config.json ...
	I0819 11:05:51.740608   12412 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/addons-110000/config.json: {Name:mk9786123efb5d118c22e3e59fe0a7223c5624f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:05:51.740945   12412 start.go:360] acquireMachinesLock for addons-110000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:05:51.741002   12412 start.go:364] duration metric: took 51.959µs to acquireMachinesLock for "addons-110000"
	I0819 11:05:51.741014   12412 start.go:93] Provisioning new machine with config: &{Name:addons-110000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-110000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:05:51.741053   12412 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:05:51.749227   12412 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 11:05:51.765685   12412 start.go:159] libmachine.API.Create for "addons-110000" (driver="qemu2")
	I0819 11:05:51.765708   12412 client.go:168] LocalClient.Create starting
	I0819 11:05:51.765848   12412 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:05:51.913294   12412 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:05:51.967785   12412 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:05:52.316949   12412 main.go:141] libmachine: Creating SSH key...
	I0819 11:05:52.356736   12412 main.go:141] libmachine: Creating Disk image...
	I0819 11:05:52.356753   12412 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:05:52.357198   12412 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2
	I0819 11:05:52.367253   12412 main.go:141] libmachine: STDOUT: 
	I0819 11:05:52.367274   12412 main.go:141] libmachine: STDERR: 
	I0819 11:05:52.367347   12412 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2 +20000M
	I0819 11:05:52.375385   12412 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:05:52.375399   12412 main.go:141] libmachine: STDERR: 
	I0819 11:05:52.375414   12412 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2
	I0819 11:05:52.375418   12412 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:05:52.375443   12412 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:05:52.375475   12412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:37:30:13:bf:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2
	I0819 11:05:52.377077   12412 main.go:141] libmachine: STDOUT: 
	I0819 11:05:52.377090   12412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:05:52.377108   12412 client.go:171] duration metric: took 611.397875ms to LocalClient.Create
	I0819 11:05:54.379318   12412 start.go:128] duration metric: took 2.638244792s to createHost
	I0819 11:05:54.379410   12412 start.go:83] releasing machines lock for "addons-110000", held for 2.638411667s
	W0819 11:05:54.379503   12412 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:05:54.391898   12412 out.go:177] * Deleting "addons-110000" in qemu2 ...
	W0819 11:05:54.423589   12412 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:05:54.423615   12412 start.go:729] Will try again in 5 seconds ...
	I0819 11:05:59.425816   12412 start.go:360] acquireMachinesLock for addons-110000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:05:59.426355   12412 start.go:364] duration metric: took 409.666µs to acquireMachinesLock for "addons-110000"
	I0819 11:05:59.426529   12412 start.go:93] Provisioning new machine with config: &{Name:addons-110000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-110000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:05:59.426832   12412 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:05:59.436304   12412 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 11:05:59.487812   12412 start.go:159] libmachine.API.Create for "addons-110000" (driver="qemu2")
	I0819 11:05:59.487864   12412 client.go:168] LocalClient.Create starting
	I0819 11:05:59.487987   12412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:05:59.488048   12412 main.go:141] libmachine: Decoding PEM data...
	I0819 11:05:59.488066   12412 main.go:141] libmachine: Parsing certificate...
	I0819 11:05:59.488149   12412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:05:59.488194   12412 main.go:141] libmachine: Decoding PEM data...
	I0819 11:05:59.488207   12412 main.go:141] libmachine: Parsing certificate...
	I0819 11:05:59.488767   12412 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:05:59.687568   12412 main.go:141] libmachine: Creating SSH key...
	I0819 11:05:59.741542   12412 main.go:141] libmachine: Creating Disk image...
	I0819 11:05:59.741557   12412 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:05:59.741858   12412 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2
	I0819 11:05:59.751104   12412 main.go:141] libmachine: STDOUT: 
	I0819 11:05:59.751121   12412 main.go:141] libmachine: STDERR: 
	I0819 11:05:59.751164   12412 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2 +20000M
	I0819 11:05:59.759102   12412 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:05:59.759115   12412 main.go:141] libmachine: STDERR: 
	I0819 11:05:59.759127   12412 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2
	I0819 11:05:59.759130   12412 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:05:59.759139   12412 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:05:59.759178   12412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:a5:8f:aa:7e:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/addons-110000/disk.qcow2
	I0819 11:05:59.760803   12412 main.go:141] libmachine: STDOUT: 
	I0819 11:05:59.760815   12412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:05:59.760829   12412 client.go:171] duration metric: took 272.958875ms to LocalClient.Create
	I0819 11:06:01.763028   12412 start.go:128] duration metric: took 2.336178875s to createHost
	I0819 11:06:01.763089   12412 start.go:83] releasing machines lock for "addons-110000", held for 2.336719916s
	W0819 11:06:01.763469   12412 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-110000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-110000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:06:01.771177   12412 out.go:201] 
	W0819 11:06:01.778171   12412 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:06:01.778198   12412 out.go:270] * 
	* 
	W0819 11:06:01.780895   12412 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:06:01.788084   12412 out.go:201] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-110000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.18s)
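
The stderr trace above also shows the full launch path: libmachine shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and execs qemu-system-aarch64 with the guest NIC bound to the inherited socket (-netdev socket,id=net0,fd=3). That layering can be exercised outside the test harness to isolate the failure; the following is a sketch using the binaries and paths from the log, with the qemu flags trimmed to minimal placeholders:

	# Reproduce the driver's connect step without minikube. When socket_vmnet
	# is down, this fails with the same "Connection refused" seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt -cpu host -accel hvf -display none -nographic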

TestCertOptions (10.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-225000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-225000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.791253709s)

-- stdout --
	* [cert-options-225000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-225000" primary control-plane node in "cert-options-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-225000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-225000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-225000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.477083ms)

-- stdout --
	* The control-plane node cert-options-225000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-225000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-225000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-225000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-225000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-225000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.836792ms)

-- stdout --
	* The control-plane node cert-options-225000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-225000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-225000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control-plane node cert-options-225000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-225000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-19 11:16:39.163845 -0700 PDT m=+668.421635334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-225000 -n cert-options-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-225000 -n cert-options-225000: exit status 7 (30.442333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-225000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-225000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-225000
--- FAIL: TestCertOptions (10.05s)
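
For reference, the SAN assertions at cert_options_test.go:69 never had a certificate to inspect, because the ssh step exited 83 against the stopped host. On a cluster that does boot, the same command from cert_options_test.go:60 surfaces the names and IPs that the --apiserver-ips/--apiserver-names flags should have baked into the certificate; a sketch (the grep filter is an addition for readability):

	out/minikube-darwin-arm64 -p cert-options-225000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	# Expected to list 127.0.0.1, 192.168.15.15, localhost and www.google.com.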

TestCertExpiration (195.27s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-924000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-924000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.947700583s)

-- stdout --
	* [cert-expiration-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-924000" primary control-plane node in "cert-expiration-924000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-924000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-924000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-924000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-924000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-924000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.187908792s)

-- stdout --
	* [cert-expiration-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-924000" primary control-plane node in "cert-expiration-924000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-924000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-924000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-924000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-924000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-924000" primary control-plane node in "cert-expiration-924000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-924000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-924000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-924000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-19 11:19:39.273585 -0700 PDT m=+848.532301959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-924000 -n cert-expiration-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-924000 -n cert-expiration-924000: exit status 7 (55.910916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-924000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-924000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-924000
--- FAIL: TestCertExpiration (195.27s)
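
TestCertExpiration drives rotation by starting with --cert-expiration=3m, waiting out the three minutes (evidently why this failure still shows 195.27s even though each start failed within seconds), then restarting with --cert-expiration=8760h and checking for the expired-certs warning. Since neither start got past VM creation, rotation was never exercised; on a healthy cluster the expiry the test manipulates can be read directly, e.g. (a sketch reusing the cert path from TestCertOptions above):

	out/minikube-darwin-arm64 -p cert-expiration-924000 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# Prints notAfter=...: roughly three minutes out after the first start,
	# about one year out after a successful restart with --cert-expiration=8760h.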

TestDockerFlags (10.13s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-391000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-391000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.891200958s)

-- stdout --
	* [docker-flags-391000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-391000" primary control-plane node in "docker-flags-391000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-391000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:16:19.113992   14362 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:16:19.114129   14362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:16:19.114132   14362 out.go:358] Setting ErrFile to fd 2...
	I0819 11:16:19.114135   14362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:16:19.114266   14362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:16:19.115338   14362 out.go:352] Setting JSON to false
	I0819 11:16:19.131236   14362 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6346,"bootTime":1724085033,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:16:19.131302   14362 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:16:19.137527   14362 out.go:177] * [docker-flags-391000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:16:19.145439   14362 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:16:19.145491   14362 notify.go:220] Checking for updates...
	I0819 11:16:19.152373   14362 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:16:19.155448   14362 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:16:19.158416   14362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:16:19.161368   14362 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:16:19.164433   14362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:16:19.167773   14362 config.go:182] Loaded profile config "force-systemd-flag-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:16:19.167837   14362 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:16:19.167880   14362 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:16:19.172429   14362 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:16:19.179469   14362 start.go:297] selected driver: qemu2
	I0819 11:16:19.179479   14362 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:16:19.179487   14362 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:16:19.181707   14362 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:16:19.185452   14362 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:16:19.188494   14362 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0819 11:16:19.188525   14362 cni.go:84] Creating CNI manager for ""
	I0819 11:16:19.188533   14362 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:16:19.188537   14362 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:16:19.188572   14362 start.go:340] cluster config:
	{Name:docker-flags-391000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:16:19.192293   14362 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:16:19.199386   14362 out.go:177] * Starting "docker-flags-391000" primary control-plane node in "docker-flags-391000" cluster
	I0819 11:16:19.203466   14362 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:16:19.203483   14362 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:16:19.203495   14362 cache.go:56] Caching tarball of preloaded images
	I0819 11:16:19.203568   14362 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:16:19.203574   14362 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:16:19.203659   14362 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/docker-flags-391000/config.json ...
	I0819 11:16:19.203670   14362 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/docker-flags-391000/config.json: {Name:mk735465d4a8b46c5e714ed5e8e0e3115a7be54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:16:19.203894   14362 start.go:360] acquireMachinesLock for docker-flags-391000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:16:19.203932   14362 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "docker-flags-391000"
	I0819 11:16:19.203945   14362 start.go:93] Provisioning new machine with config: &{Name:docker-flags-391000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:16:19.203976   14362 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:16:19.211434   14362 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:16:19.229864   14362 start.go:159] libmachine.API.Create for "docker-flags-391000" (driver="qemu2")
	I0819 11:16:19.229893   14362 client.go:168] LocalClient.Create starting
	I0819 11:16:19.229968   14362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:16:19.230015   14362 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:19.230026   14362 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:19.230065   14362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:16:19.230093   14362 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:19.230100   14362 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:19.230474   14362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:16:19.385951   14362 main.go:141] libmachine: Creating SSH key...
	I0819 11:16:19.456502   14362 main.go:141] libmachine: Creating Disk image...
	I0819 11:16:19.456507   14362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:16:19.456736   14362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2
	I0819 11:16:19.466100   14362 main.go:141] libmachine: STDOUT: 
	I0819 11:16:19.466118   14362 main.go:141] libmachine: STDERR: 
	I0819 11:16:19.466171   14362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2 +20000M
	I0819 11:16:19.474027   14362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:16:19.474044   14362 main.go:141] libmachine: STDERR: 
	I0819 11:16:19.474063   14362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2
	I0819 11:16:19.474068   14362 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:16:19.474080   14362 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:16:19.474105   14362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:92:13:3d:ac:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2
	I0819 11:16:19.475690   14362 main.go:141] libmachine: STDOUT: 
	I0819 11:16:19.475705   14362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:16:19.475723   14362 client.go:171] duration metric: took 245.825792ms to LocalClient.Create
	I0819 11:16:21.477903   14362 start.go:128] duration metric: took 2.273919458s to createHost
	I0819 11:16:21.477988   14362 start.go:83] releasing machines lock for "docker-flags-391000", held for 2.274057834s
	W0819 11:16:21.478073   14362 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:21.502291   14362 out.go:177] * Deleting "docker-flags-391000" in qemu2 ...
	W0819 11:16:21.525809   14362 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:21.525836   14362 start.go:729] Will try again in 5 seconds ...
	I0819 11:16:26.528023   14362 start.go:360] acquireMachinesLock for docker-flags-391000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:16:26.555966   14362 start.go:364] duration metric: took 27.803042ms to acquireMachinesLock for "docker-flags-391000"
	I0819 11:16:26.556105   14362 start.go:93] Provisioning new machine with config: &{Name:docker-flags-391000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:16:26.556423   14362 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:16:26.570992   14362 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:16:26.618612   14362 start.go:159] libmachine.API.Create for "docker-flags-391000" (driver="qemu2")
	I0819 11:16:26.618663   14362 client.go:168] LocalClient.Create starting
	I0819 11:16:26.618796   14362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:16:26.618863   14362 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:26.618887   14362 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:26.618949   14362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:16:26.618996   14362 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:26.619010   14362 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:26.619610   14362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:16:26.783105   14362 main.go:141] libmachine: Creating SSH key...
	I0819 11:16:26.904317   14362 main.go:141] libmachine: Creating Disk image...
	I0819 11:16:26.904322   14362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:16:26.904508   14362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2
	I0819 11:16:26.914107   14362 main.go:141] libmachine: STDOUT: 
	I0819 11:16:26.914126   14362 main.go:141] libmachine: STDERR: 
	I0819 11:16:26.914188   14362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2 +20000M
	I0819 11:16:26.922231   14362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:16:26.922247   14362 main.go:141] libmachine: STDERR: 
	I0819 11:16:26.922258   14362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2
	I0819 11:16:26.922264   14362 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:16:26.922274   14362 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:16:26.922311   14362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e3:14:78:19:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/docker-flags-391000/disk.qcow2
	I0819 11:16:26.923883   14362 main.go:141] libmachine: STDOUT: 
	I0819 11:16:26.923897   14362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:16:26.923911   14362 client.go:171] duration metric: took 305.243ms to LocalClient.Create
	I0819 11:16:28.926150   14362 start.go:128] duration metric: took 2.369711334s to createHost
	I0819 11:16:28.926225   14362 start.go:83] releasing machines lock for "docker-flags-391000", held for 2.37023475s
	W0819 11:16:28.926589   14362 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-391000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:28.943383   14362 out.go:201] 
	W0819 11:16:28.951342   14362 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:16:28.951370   14362 out.go:270] * 
	W0819 11:16:28.954346   14362 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:16:28.962240   14362 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-391000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-391000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-391000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (86.119875ms)

-- stdout --
	* The control-plane node docker-flags-391000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-391000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-391000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-391000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-391000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-391000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-391000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-391000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-391000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (42.707084ms)

-- stdout --
	* The control-plane node docker-flags-391000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-391000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-391000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-391000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-391000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-391000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-19 11:16:29.112169 -0700 PDT m=+658.369907376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-391000 -n docker-flags-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-391000 -n docker-flags-391000: exit status 7 (29.255208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-391000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-391000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-391000
--- FAIL: TestDockerFlags (10.13s)
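
Every assertion in this test fails downstream of a single error: socket_vmnet_client could not reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never received its network file descriptor and the VM never booted; each later ssh/status call then sees a stopped host. A pre-flight check on the CI host along the lines below would surface the problem before the suite runs (paths are taken from the log above; the Homebrew service name is an assumption and may differ for a manually installed daemon):

	# sketch: confirm the daemon's unix socket exists before starting tests
	ls -l /var/run/socket_vmnet
	# probe the socket with the same client binary the qemu2 driver uses
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# if the connection is refused, restart the daemon (assumes a Homebrew-managed install)
	sudo brew services restart socket_vmnet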

TestForceSystemdFlag (10.34s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag


=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-411000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-411000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.134697084s)

-- stdout --
	* [force-systemd-flag-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-411000" primary control-plane node in "force-systemd-flag-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:16:13.820208   14339 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:16:13.820331   14339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:16:13.820335   14339 out.go:358] Setting ErrFile to fd 2...
	I0819 11:16:13.820338   14339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:16:13.820445   14339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:16:13.821540   14339 out.go:352] Setting JSON to false
	I0819 11:16:13.837525   14339 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6340,"bootTime":1724085033,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:16:13.837604   14339 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:16:13.843716   14339 out.go:177] * [force-systemd-flag-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:16:13.850732   14339 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:16:13.850808   14339 notify.go:220] Checking for updates...
	I0819 11:16:13.858618   14339 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:16:13.862699   14339 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:16:13.864291   14339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:16:13.867690   14339 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:16:13.870670   14339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:16:13.874051   14339 config.go:182] Loaded profile config "force-systemd-env-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:16:13.874121   14339 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:16:13.874169   14339 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:16:13.878641   14339 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:16:13.885670   14339 start.go:297] selected driver: qemu2
	I0819 11:16:13.885678   14339 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:16:13.885684   14339 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:16:13.887807   14339 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:16:13.890683   14339 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:16:13.893777   14339 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:16:13.893797   14339 cni.go:84] Creating CNI manager for ""
	I0819 11:16:13.893807   14339 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:16:13.893812   14339 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:16:13.893845   14339 start.go:340] cluster config:
	{Name:force-systemd-flag-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:16:13.897430   14339 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:16:13.904718   14339 out.go:177] * Starting "force-systemd-flag-411000" primary control-plane node in "force-systemd-flag-411000" cluster
	I0819 11:16:13.908703   14339 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:16:13.908718   14339 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:16:13.908727   14339 cache.go:56] Caching tarball of preloaded images
	I0819 11:16:13.908793   14339 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:16:13.908799   14339 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:16:13.908857   14339 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/force-systemd-flag-411000/config.json ...
	I0819 11:16:13.908870   14339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/force-systemd-flag-411000/config.json: {Name:mke0d8b71779188a1550303b7f8a9ea4cc537561 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:16:13.909099   14339 start.go:360] acquireMachinesLock for force-systemd-flag-411000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:16:13.909147   14339 start.go:364] duration metric: took 37.417µs to acquireMachinesLock for "force-systemd-flag-411000"
	I0819 11:16:13.909162   14339 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:16:13.909190   14339 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:16:13.917678   14339 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:16:13.936062   14339 start.go:159] libmachine.API.Create for "force-systemd-flag-411000" (driver="qemu2")
	I0819 11:16:13.936091   14339 client.go:168] LocalClient.Create starting
	I0819 11:16:13.936153   14339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:16:13.936193   14339 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:13.936203   14339 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:13.936239   14339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:16:13.936268   14339 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:13.936276   14339 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:13.936657   14339 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:16:14.091710   14339 main.go:141] libmachine: Creating SSH key...
	I0819 11:16:14.175285   14339 main.go:141] libmachine: Creating Disk image...
	I0819 11:16:14.175291   14339 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:16:14.175505   14339 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2
	I0819 11:16:14.184852   14339 main.go:141] libmachine: STDOUT: 
	I0819 11:16:14.184870   14339 main.go:141] libmachine: STDERR: 
	I0819 11:16:14.184920   14339 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2 +20000M
	I0819 11:16:14.192735   14339 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:16:14.192751   14339 main.go:141] libmachine: STDERR: 
	I0819 11:16:14.192772   14339 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2
	I0819 11:16:14.192777   14339 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:16:14.192789   14339 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:16:14.192813   14339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:68:b6:bb:8e:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2
	I0819 11:16:14.194422   14339 main.go:141] libmachine: STDOUT: 
	I0819 11:16:14.194439   14339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:16:14.194459   14339 client.go:171] duration metric: took 258.364708ms to LocalClient.Create
	I0819 11:16:16.196670   14339 start.go:128] duration metric: took 2.287474958s to createHost
	I0819 11:16:16.196733   14339 start.go:83] releasing machines lock for "force-systemd-flag-411000", held for 2.287587958s
	W0819 11:16:16.196824   14339 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:16.208886   14339 out.go:177] * Deleting "force-systemd-flag-411000" in qemu2 ...
	W0819 11:16:16.241698   14339 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:16.241727   14339 start.go:729] Will try again in 5 seconds ...
	I0819 11:16:21.243899   14339 start.go:360] acquireMachinesLock for force-systemd-flag-411000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:16:21.478096   14339 start.go:364] duration metric: took 234.11275ms to acquireMachinesLock for "force-systemd-flag-411000"
	I0819 11:16:21.478270   14339 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:16:21.478521   14339 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:16:21.492232   14339 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:16:21.543681   14339 start.go:159] libmachine.API.Create for "force-systemd-flag-411000" (driver="qemu2")
	I0819 11:16:21.543749   14339 client.go:168] LocalClient.Create starting
	I0819 11:16:21.543878   14339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:16:21.543939   14339 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:21.543959   14339 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:21.544023   14339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:16:21.544065   14339 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:21.544081   14339 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:21.544626   14339 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:16:21.721985   14339 main.go:141] libmachine: Creating SSH key...
	I0819 11:16:21.849859   14339 main.go:141] libmachine: Creating Disk image...
	I0819 11:16:21.849865   14339 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:16:21.850057   14339 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2
	I0819 11:16:21.859492   14339 main.go:141] libmachine: STDOUT: 
	I0819 11:16:21.859513   14339 main.go:141] libmachine: STDERR: 
	I0819 11:16:21.859563   14339 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2 +20000M
	I0819 11:16:21.867517   14339 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:16:21.867531   14339 main.go:141] libmachine: STDERR: 
	I0819 11:16:21.867546   14339 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2
	I0819 11:16:21.867550   14339 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:16:21.867564   14339 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:16:21.867597   14339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:93:1f:21:ae:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-flag-411000/disk.qcow2
	I0819 11:16:21.869141   14339 main.go:141] libmachine: STDOUT: 
	I0819 11:16:21.869155   14339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:16:21.869172   14339 client.go:171] duration metric: took 325.414ms to LocalClient.Create
	I0819 11:16:23.871308   14339 start.go:128] duration metric: took 2.392746292s to createHost
	I0819 11:16:23.871369   14339 start.go:83] releasing machines lock for "force-systemd-flag-411000", held for 2.393255042s
	W0819 11:16:23.871748   14339 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:23.890794   14339 out.go:201] 
	W0819 11:16:23.895707   14339 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:16:23.895746   14339 out.go:270] * 
	W0819 11:16:23.898375   14339 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:16:23.912704   14339 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-411000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-411000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-411000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.155375ms)

-- stdout --
	* The control-plane node force-systemd-flag-411000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-411000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-411000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-19 11:16:24.011855 -0700 PDT m=+653.269567376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-411000 -n force-systemd-flag-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-411000 -n force-systemd-flag-411000: exit status 7 (35.867291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-411000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-411000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-411000
--- FAIL: TestForceSystemdFlag (10.34s)
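
Same root cause as TestDockerFlags above: both VM creation attempts died on the unreachable /var/run/socket_vmnet socket, so the test never reached its real assertion, namely that --force-systemd switches Docker's cgroup driver. With the daemon reachable, that check reduces to the command already shown in the log (sketch; the expected "systemd" output is inferred from the flag's intent, not from this run):

	# should print "systemd" once the VM boots with --force-systemd
	out/minikube-darwin-arm64 -p force-systemd-flag-411000 ssh "docker info --format {{.CgroupDriver}}"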

TestForceSystemdEnv (10.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv


=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-809000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-809000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.839394625s)

-- stdout --
	* [force-systemd-env-809000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-809000" primary control-plane node in "force-systemd-env-809000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-809000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:16:09.083082   14315 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:16:09.083203   14315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:16:09.083207   14315 out.go:358] Setting ErrFile to fd 2...
	I0819 11:16:09.083209   14315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:16:09.083341   14315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:16:09.084297   14315 out.go:352] Setting JSON to false
	I0819 11:16:09.100784   14315 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6336,"bootTime":1724085033,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:16:09.100871   14315 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:16:09.104931   14315 out.go:177] * [force-systemd-env-809000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:16:09.111875   14315 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:16:09.111907   14315 notify.go:220] Checking for updates...
	I0819 11:16:09.117498   14315 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:16:09.120876   14315 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:16:09.123907   14315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:16:09.126915   14315 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:16:09.129916   14315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0819 11:16:09.133143   14315 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:16:09.133189   14315 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:16:09.137912   14315 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:16:09.144880   14315 start.go:297] selected driver: qemu2
	I0819 11:16:09.144889   14315 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:16:09.144894   14315 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:16:09.146951   14315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:16:09.150710   14315 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:16:09.153882   14315 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:16:09.153896   14315 cni.go:84] Creating CNI manager for ""
	I0819 11:16:09.153901   14315 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:16:09.153905   14315 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:16:09.153929   14315 start.go:340] cluster config:
	{Name:force-systemd-env-809000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-809000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:16:09.157123   14315 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:16:09.162877   14315 out.go:177] * Starting "force-systemd-env-809000" primary control-plane node in "force-systemd-env-809000" cluster
	I0819 11:16:09.166854   14315 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:16:09.166867   14315 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:16:09.166874   14315 cache.go:56] Caching tarball of preloaded images
	I0819 11:16:09.166924   14315 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:16:09.166928   14315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:16:09.166979   14315 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/force-systemd-env-809000/config.json ...
	I0819 11:16:09.166988   14315 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/force-systemd-env-809000/config.json: {Name:mk1b271bd0493a302c21dcb5811273d56ff6c39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:16:09.167189   14315 start.go:360] acquireMachinesLock for force-systemd-env-809000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:16:09.167221   14315 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "force-systemd-env-809000"
	I0819 11:16:09.167235   14315 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-809000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-809000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:16:09.167266   14315 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:16:09.175895   14315 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:16:09.190942   14315 start.go:159] libmachine.API.Create for "force-systemd-env-809000" (driver="qemu2")
	I0819 11:16:09.190967   14315 client.go:168] LocalClient.Create starting
	I0819 11:16:09.191024   14315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:16:09.191060   14315 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:09.191067   14315 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:09.191105   14315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:16:09.191128   14315 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:09.191136   14315 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:09.191466   14315 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:16:09.339020   14315 main.go:141] libmachine: Creating SSH key...
	I0819 11:16:09.463827   14315 main.go:141] libmachine: Creating Disk image...
	I0819 11:16:09.463837   14315 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:16:09.464085   14315 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2
	I0819 11:16:09.474032   14315 main.go:141] libmachine: STDOUT: 
	I0819 11:16:09.474056   14315 main.go:141] libmachine: STDERR: 
	I0819 11:16:09.474133   14315 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2 +20000M
	I0819 11:16:09.482842   14315 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:16:09.482861   14315 main.go:141] libmachine: STDERR: 
	I0819 11:16:09.482875   14315 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2
	I0819 11:16:09.482880   14315 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:16:09.482893   14315 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:16:09.482921   14315 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:e6:bb:dc:03:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2
	I0819 11:16:09.484625   14315 main.go:141] libmachine: STDOUT: 
	I0819 11:16:09.484641   14315 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:16:09.484661   14315 client.go:171] duration metric: took 293.69225ms to LocalClient.Create
	I0819 11:16:11.486888   14315 start.go:128] duration metric: took 2.319603625s to createHost
	I0819 11:16:11.486946   14315 start.go:83] releasing machines lock for "force-systemd-env-809000", held for 2.319725459s
	W0819 11:16:11.487038   14315 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:11.497173   14315 out.go:177] * Deleting "force-systemd-env-809000" in qemu2 ...
	W0819 11:16:11.532410   14315 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:11.532460   14315 start.go:729] Will try again in 5 seconds ...
	I0819 11:16:16.534641   14315 start.go:360] acquireMachinesLock for force-systemd-env-809000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:16:16.535138   14315 start.go:364] duration metric: took 365.542µs to acquireMachinesLock for "force-systemd-env-809000"
	I0819 11:16:16.535278   14315 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-809000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-809000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:16:16.535707   14315 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:16:16.544174   14315 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:16:16.595203   14315 start.go:159] libmachine.API.Create for "force-systemd-env-809000" (driver="qemu2")
	I0819 11:16:16.595256   14315 client.go:168] LocalClient.Create starting
	I0819 11:16:16.595383   14315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:16:16.595446   14315 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:16.595462   14315 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:16.595534   14315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:16:16.595579   14315 main.go:141] libmachine: Decoding PEM data...
	I0819 11:16:16.595593   14315 main.go:141] libmachine: Parsing certificate...
	I0819 11:16:16.596119   14315 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:16:16.767500   14315 main.go:141] libmachine: Creating SSH key...
	I0819 11:16:16.832038   14315 main.go:141] libmachine: Creating Disk image...
	I0819 11:16:16.832045   14315 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:16:16.832271   14315 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2
	I0819 11:16:16.841450   14315 main.go:141] libmachine: STDOUT: 
	I0819 11:16:16.841469   14315 main.go:141] libmachine: STDERR: 
	I0819 11:16:16.841514   14315 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2 +20000M
	I0819 11:16:16.849343   14315 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:16:16.849359   14315 main.go:141] libmachine: STDERR: 
	I0819 11:16:16.849368   14315 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2
	I0819 11:16:16.849374   14315 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:16:16.849386   14315 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:16:16.849420   14315 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:04:95:97:4b:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/force-systemd-env-809000/disk.qcow2
	I0819 11:16:16.851039   14315 main.go:141] libmachine: STDOUT: 
	I0819 11:16:16.851054   14315 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:16:16.851068   14315 client.go:171] duration metric: took 255.807416ms to LocalClient.Create
	I0819 11:16:18.853229   14315 start.go:128] duration metric: took 2.317506208s to createHost
	I0819 11:16:18.853281   14315 start.go:83] releasing machines lock for "force-systemd-env-809000", held for 2.318126917s
	W0819 11:16:18.853584   14315 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-809000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-809000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:16:18.863597   14315 out.go:201] 
	W0819 11:16:18.867738   14315 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:16:18.867808   14315 out.go:270] * 
	* 
	W0819 11:16:18.870380   14315 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:16:18.878710   14315 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-809000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-809000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-809000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.033541ms)

-- stdout --
	* The control-plane node force-systemd-env-809000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-809000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-809000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-19 11:16:18.976608 -0700 PDT m=+648.234293626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-809000 -n force-systemd-env-809000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-809000 -n force-systemd-env-809000: exit status 7 (35.757458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-809000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-809000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-809000
--- FAIL: TestForceSystemdEnv (10.03s)
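
Every failure in this report shares one root cause: minikube launches qemu through socket_vmnet_client, and the connection to the socket_vmnet daemon's socket at /var/run/socket_vmnet is refused, so no VM ever boots and every later assertion runs against a Stopped host. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs (the service commands below are an assumption about this agent's setup, not taken from the log):

	# Is the daemon's unix socket present? A missing socket explains "Connection refused".
	ls -l /var/run/socket_vmnet
	# socket_vmnet needs root to create vmnet interfaces; restart the Homebrew-managed service.
	sudo brew services restart socket_vmnet
	# Verify it reports "started" before re-running the suite.
	sudo brew services list | grep socket_vmnet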

TestErrorSpam/setup (9.77s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-240000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-240000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 --driver=qemu2 : exit status 80 (9.763794167s)

-- stdout --
	* [nospam-240000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-240000" primary control-plane node in "nospam-240000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-240000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-240000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-240000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-240000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-240000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19468
- KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-240000" primary control-plane node in "nospam-240000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-240000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-240000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.77s)

TestFunctional/serial/StartWithProxy (9.95s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-924000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-924000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.882058292s)

-- stdout --
	* [functional-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-924000" primary control-plane node in "functional-924000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-924000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51984 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51984 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51984 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-924000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-924000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19468
- KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-924000" primary control-plane node in "functional-924000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-924000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51984 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51984 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51984 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-924000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (68.853458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.95s)
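
The two "want" strings above show what this test actually asserts: with HTTP_PROXY set, a successful start should print "Found network options:" and "You appear to be using a proxy". The "! Local proxy ignored" warnings reflect minikube's behavior of not passing localhost proxies into the guest's docker env, since the VM cannot reach the host's localhost; a routable address should be forwarded. A hedged illustration of the difference (the second proxy address is hypothetical, not from this run):

	# Localhost proxy: minikube warns and does not pass it to the docker env in the VM.
	HTTP_PROXY=localhost:51984 out/minikube-darwin-arm64 start -p functional-924000 --driver=qemu2
	# Routable proxy: should be forwarded into the guest environment on a successful start.
	HTTP_PROXY=http://10.0.0.5:3128 out/minikube-darwin-arm64 start -p functional-924000 --driver=qemu2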

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-924000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-924000 --alsologtostderr -v=8: exit status 80 (5.188662916s)

-- stdout --
	* [functional-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-924000" primary control-plane node in "functional-924000" cluster
	* Restarting existing qemu2 VM for "functional-924000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-924000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:06:32.541232   12583 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:06:32.541364   12583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:06:32.541368   12583 out.go:358] Setting ErrFile to fd 2...
	I0819 11:06:32.541370   12583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:06:32.541502   12583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:06:32.542499   12583 out.go:352] Setting JSON to false
	I0819 11:06:32.558692   12583 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5759,"bootTime":1724085033,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:06:32.558771   12583 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:06:32.562597   12583 out.go:177] * [functional-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:06:32.571470   12583 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:06:32.571502   12583 notify.go:220] Checking for updates...
	I0819 11:06:32.577050   12583 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:06:32.581421   12583 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:06:32.584425   12583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:06:32.585828   12583 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:06:32.589468   12583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:06:32.592702   12583 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:06:32.592755   12583 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:06:32.597292   12583 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:06:32.604443   12583 start.go:297] selected driver: qemu2
	I0819 11:06:32.604448   12583 start.go:901] validating driver "qemu2" against &{Name:functional-924000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:06:32.604502   12583 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:06:32.606785   12583 cni.go:84] Creating CNI manager for ""
	I0819 11:06:32.606801   12583 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:06:32.606850   12583 start.go:340] cluster config:
	{Name:functional-924000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:06:32.610282   12583 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:06:32.618428   12583 out.go:177] * Starting "functional-924000" primary control-plane node in "functional-924000" cluster
	I0819 11:06:32.622435   12583 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:06:32.622454   12583 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:06:32.622465   12583 cache.go:56] Caching tarball of preloaded images
	I0819 11:06:32.622534   12583 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:06:32.622539   12583 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:06:32.622589   12583 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/functional-924000/config.json ...
	I0819 11:06:32.623066   12583 start.go:360] acquireMachinesLock for functional-924000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:06:32.623095   12583 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "functional-924000"
	I0819 11:06:32.623105   12583 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:06:32.623110   12583 fix.go:54] fixHost starting: 
	I0819 11:06:32.623228   12583 fix.go:112] recreateIfNeeded on functional-924000: state=Stopped err=<nil>
	W0819 11:06:32.623238   12583 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:06:32.631454   12583 out.go:177] * Restarting existing qemu2 VM for "functional-924000" ...
	I0819 11:06:32.635372   12583 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:06:32.635409   12583 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6a:2e:cc:08:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/disk.qcow2
	I0819 11:06:32.637450   12583 main.go:141] libmachine: STDOUT: 
	I0819 11:06:32.637470   12583 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:06:32.637502   12583 fix.go:56] duration metric: took 14.392041ms for fixHost
	I0819 11:06:32.637505   12583 start.go:83] releasing machines lock for "functional-924000", held for 14.406333ms
	W0819 11:06:32.637512   12583 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:06:32.637553   12583 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:06:32.637558   12583 start.go:729] Will try again in 5 seconds ...
	I0819 11:06:37.639739   12583 start.go:360] acquireMachinesLock for functional-924000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:06:37.640252   12583 start.go:364] duration metric: took 388.25µs to acquireMachinesLock for "functional-924000"
	I0819 11:06:37.640402   12583 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:06:37.640424   12583 fix.go:54] fixHost starting: 
	I0819 11:06:37.641163   12583 fix.go:112] recreateIfNeeded on functional-924000: state=Stopped err=<nil>
	W0819 11:06:37.641189   12583 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:06:37.645783   12583 out.go:177] * Restarting existing qemu2 VM for "functional-924000" ...
	I0819 11:06:37.653528   12583 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:06:37.653771   12583 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6a:2e:cc:08:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/disk.qcow2
	I0819 11:06:37.663189   12583 main.go:141] libmachine: STDOUT: 
	I0819 11:06:37.663262   12583 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:06:37.663366   12583 fix.go:56] duration metric: took 22.943709ms for fixHost
	I0819 11:06:37.663386   12583 start.go:83] releasing machines lock for "functional-924000", held for 23.11175ms
	W0819 11:06:37.663561   12583 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-924000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-924000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:06:37.670533   12583 out.go:201] 
	W0819 11:06:37.674683   12583 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:06:37.674711   12583 out.go:270] * 
	* 
	W0819 11:06:37.677419   12583 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:06:37.685653   12583 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-924000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.190337333s for "functional-924000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (68.132917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
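
Unlike the fresh-start tests, SoftStart finds the existing profile and takes the fixHost path ("Restarting existing qemu2 VM") rather than creating a machine, but it hits the same socket_vmnet connection refusal. For a profile wedged in state=Stopped like this, the recovery the error output itself suggests is, roughly:

	out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000
	out/minikube-darwin-arm64 delete -p functional-924000
	out/minikube-darwin-arm64 start -p functional-924000 --driver=qemu2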

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.809959ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-924000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (31.152167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
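
minikube only writes a cluster entry and context into the kubeconfig once a start succeeds, so after the failed starts there is nothing for kubectl config current-context to return. Two standard kubectl commands confirm what the kubeconfig actually contains (a diagnostic sketch, not part of the test):

	kubectl config get-contexts    # functional-924000 is absent after the failed start
	kubectl config view --minify   # fails the same way while no current context is set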

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-924000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-924000 get po -A: exit status 1 (26.6515ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-924000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-924000\n"*: args "kubectl --context functional-924000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-924000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (29.911084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh sudo crictl images: exit status 83 (43.930625ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-924000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
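
This check presumes an earlier minikube cache add step pinned registry.k8s.io/pause:3.3 into the cluster and then verifies the image is visible to the container runtime inside the node. Against a healthy cluster, the manual equivalent of what the test automates would be roughly:

	out/minikube-darwin-arm64 -p functional-924000 cache add registry.k8s.io/pause:3.3
	out/minikube-darwin-arm64 -p functional-924000 ssh sudo crictl images | grep pause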

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (36.874041ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-924000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.020792ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.722959ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-924000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
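
The cache_reload sequence is: remove the cached image inside the guest, confirm crictl inspecti now fails, run cache reload to push the cached tarball back into the node, then expect inspecti to succeed again. On a running cluster the same round trip, using the commands the test invokes above, is:

	out/minikube-darwin-arm64 -p functional-924000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-924000 cache reload
	out/minikube-darwin-arm64 -p functional-924000 ssh sudo crictl inspecti registry.k8s.io/pause:latest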

TestFunctional/serial/MinikubeKubectlCmd (0.77s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 kubectl -- --context functional-924000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 kubectl -- --context functional-924000 get pods: exit status 1 (734.596917ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-924000
	* no server found for cluster "functional-924000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-924000 kubectl -- --context functional-924000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (30.387333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.77s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-924000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-924000 get pods: exit status 1 (1.012396167s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-924000
	* no server found for cluster "functional-924000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-924000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (29.901958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.04s)
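
Note: both kubectl variants (via the minikube wrapper above, and directly here) fail for the same reason: the aborted start never wrote a functional-924000 entry into the kubeconfig. A quick way to confirm, as a sketch; the KUBECONFIG path is taken from the log, and the expectation that the context is absent is an assumption about this run's state:

	KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig kubectl config get-contexts   # functional-924000 should not be listed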

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-924000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-924000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.188466125s)

-- stdout --
	* [functional-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-924000" primary control-plane node in "functional-924000" cluster
	* Restarting existing qemu2 VM for "functional-924000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-924000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-924000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-924000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.188943125s for "functional-924000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (68.978209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
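
Note: the underlying error in this restart, and in the failures above, is the qemu2 driver's inability to reach the socket_vmnet socket on the CI host. A minimal triage sketch; the socket and client paths are taken from the log, while treating socket_vmnet as a launchd-managed daemon is an assumption about this host's setup:

	ls -l /var/run/socket_vmnet                                            # does the socket exist at all?
	sudo launchctl list | grep -i socket_vmnet                             # is the daemon loaded?
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true   # should print "Connection refused" when the daemon is down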

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-924000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-924000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.27875ms)

** stderr ** 
	error: context "functional-924000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-924000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (30.420208ms)

                                                
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
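
Note: this check never reaches the health assertion; the kubectl query itself fails because the context is missing. For reference, the query copied from the log, with the healthy outcome stated as an assumption: on a running cluster the static control-plane pods (kube-apiserver, kube-controller-manager, kube-scheduler, etcd) carry the tier=control-plane label and should all report Running:

	kubectl --context functional-924000 get po -l tier=control-plane -n kube-system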

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 logs: exit status 83 (74.665625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-203000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
	|         | -p download-only-203000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
	| delete  | -p download-only-203000                                                  | download-only-203000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
	| start   | -o=json --download-only                                                  | download-only-843000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
	|         | -p download-only-843000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
	| delete  | -p download-only-843000                                                  | download-only-843000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
	| delete  | -p download-only-203000                                                  | download-only-203000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
	| delete  | -p download-only-843000                                                  | download-only-843000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
	| start   | --download-only -p                                                       | binary-mirror-041000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
	|         | binary-mirror-041000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51949                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-041000                                                  | binary-mirror-041000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
	| addons  | enable dashboard -p                                                      | addons-110000        | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
	|         | addons-110000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-110000        | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
	|         | addons-110000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-110000 --wait=true                                             | addons-110000        | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-110000                                                         | addons-110000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	| start   | -p nospam-240000 -n=1 --memory=2250 --wait=false                         | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-240000                                                         | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	| start   | -p functional-924000                                                     | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-924000                                                     | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-924000 cache add                                              | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-924000 cache add                                              | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-924000 cache add                                              | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-924000 cache add                                              | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | minikube-local-cache-test:functional-924000                              |                      |         |         |                     |                     |
	| cache   | functional-924000 cache delete                                           | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | minikube-local-cache-test:functional-924000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	| ssh     | functional-924000 ssh sudo                                               | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-924000                                                        | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-924000 ssh                                                    | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-924000 cache reload                                           | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	| ssh     | functional-924000 ssh                                                    | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-924000 kubectl --                                             | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | --context functional-924000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-924000                                                     | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:06:42
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:06:42.920821   12674 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:06:42.920964   12674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:06:42.920965   12674 out.go:358] Setting ErrFile to fd 2...
	I0819 11:06:42.920967   12674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:06:42.921109   12674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:06:42.922148   12674 out.go:352] Setting JSON to false
	I0819 11:06:42.938182   12674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5769,"bootTime":1724085033,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:06:42.938246   12674 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:06:42.946279   12674 out.go:177] * [functional-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:06:42.955184   12674 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:06:42.955229   12674 notify.go:220] Checking for updates...
	I0819 11:06:42.964161   12674 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:06:42.967205   12674 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:06:42.970128   12674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:06:42.973143   12674 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:06:42.976213   12674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:06:42.979449   12674 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:06:42.979502   12674 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:06:42.984077   12674 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:06:42.990049   12674 start.go:297] selected driver: qemu2
	I0819 11:06:42.990053   12674 start.go:901] validating driver "qemu2" against &{Name:functional-924000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:06:42.990140   12674 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:06:42.992612   12674 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:06:42.992650   12674 cni.go:84] Creating CNI manager for ""
	I0819 11:06:42.992656   12674 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:06:42.992698   12674 start.go:340] cluster config:
	{Name:functional-924000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:06:42.996383   12674 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:06:43.004129   12674 out.go:177] * Starting "functional-924000" primary control-plane node in "functional-924000" cluster
	I0819 11:06:43.008135   12674 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:06:43.008151   12674 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:06:43.008162   12674 cache.go:56] Caching tarball of preloaded images
	I0819 11:06:43.008236   12674 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:06:43.008241   12674 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:06:43.008310   12674 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/functional-924000/config.json ...
	I0819 11:06:43.008805   12674 start.go:360] acquireMachinesLock for functional-924000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:06:43.008843   12674 start.go:364] duration metric: took 32.083µs to acquireMachinesLock for "functional-924000"
	I0819 11:06:43.008851   12674 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:06:43.008856   12674 fix.go:54] fixHost starting: 
	I0819 11:06:43.008988   12674 fix.go:112] recreateIfNeeded on functional-924000: state=Stopped err=<nil>
	W0819 11:06:43.008995   12674 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:06:43.016081   12674 out.go:177] * Restarting existing qemu2 VM for "functional-924000" ...
	I0819 11:06:43.020152   12674 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:06:43.020197   12674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6a:2e:cc:08:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/disk.qcow2
	I0819 11:06:43.022339   12674 main.go:141] libmachine: STDOUT: 
	I0819 11:06:43.022361   12674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:06:43.022390   12674 fix.go:56] duration metric: took 13.535584ms for fixHost
	I0819 11:06:43.022393   12674 start.go:83] releasing machines lock for "functional-924000", held for 13.5475ms
	W0819 11:06:43.022399   12674 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:06:43.022435   12674 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:06:43.022440   12674 start.go:729] Will try again in 5 seconds ...
	I0819 11:06:48.024685   12674 start.go:360] acquireMachinesLock for functional-924000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:06:48.025113   12674 start.go:364] duration metric: took 353.917µs to acquireMachinesLock for "functional-924000"
	I0819 11:06:48.025256   12674 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:06:48.025272   12674 fix.go:54] fixHost starting: 
	I0819 11:06:48.025993   12674 fix.go:112] recreateIfNeeded on functional-924000: state=Stopped err=<nil>
	W0819 11:06:48.026013   12674 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:06:48.031554   12674 out.go:177] * Restarting existing qemu2 VM for "functional-924000" ...
	I0819 11:06:48.035592   12674 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:06:48.035954   12674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6a:2e:cc:08:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/disk.qcow2
	I0819 11:06:48.045101   12674 main.go:141] libmachine: STDOUT: 
	I0819 11:06:48.045144   12674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:06:48.045223   12674 fix.go:56] duration metric: took 19.954833ms for fixHost
	I0819 11:06:48.045236   12674 start.go:83] releasing machines lock for "functional-924000", held for 20.106292ms
	W0819 11:06:48.045438   12674 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-924000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:06:48.054555   12674 out.go:201] 
	W0819 11:06:48.058540   12674 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:06:48.058606   12674 out.go:270] * 
	W0819 11:06:48.061569   12674 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:06:48.067521   12674 out.go:201] 
	
	
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-924000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-203000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
|         | -p download-only-203000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
| delete  | -p download-only-203000                                                  | download-only-203000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
| start   | -o=json --download-only                                                  | download-only-843000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
|         | -p download-only-843000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
| delete  | -p download-only-843000                                                  | download-only-843000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
| delete  | -p download-only-203000                                                  | download-only-203000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
| delete  | -p download-only-843000                                                  | download-only-843000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
| start   | --download-only -p                                                       | binary-mirror-041000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
|         | binary-mirror-041000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51949                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-041000                                                  | binary-mirror-041000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
| addons  | enable dashboard -p                                                      | addons-110000        | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
|         | addons-110000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-110000        | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
|         | addons-110000                                                            |                      |         |         |                     |                     |
| start   | -p addons-110000 --wait=true                                             | addons-110000        | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-110000                                                         | addons-110000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
| start   | -p nospam-240000 -n=1 --memory=2250 --wait=false                         | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-240000 --log_dir                                                  | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-240000                                                         | nospam-240000        | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
| start   | -p functional-924000                                                     | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-924000                                                     | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-924000 cache add                                              | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-924000 cache add                                              | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-924000 cache add                                              | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-924000 cache add                                              | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | minikube-local-cache-test:functional-924000                              |                      |         |         |                     |                     |
| cache   | functional-924000 cache delete                                           | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | minikube-local-cache-test:functional-924000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
| ssh     | functional-924000 ssh sudo                                               | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-924000                                                        | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-924000 ssh                                                    | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-924000 cache reload                                           | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
| ssh     | functional-924000 ssh                                                    | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT | 19 Aug 24 11:06 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-924000 kubectl --                                             | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | --context functional-924000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-924000                                                     | functional-924000    | jenkins | v1.33.1 | 19 Aug 24 11:06 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/08/19 11:06:42
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0819 11:06:42.920821   12674 out.go:345] Setting OutFile to fd 1 ...
I0819 11:06:42.920964   12674 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:06:42.920965   12674 out.go:358] Setting ErrFile to fd 2...
I0819 11:06:42.920967   12674 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:06:42.921109   12674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:06:42.922148   12674 out.go:352] Setting JSON to false
I0819 11:06:42.938182   12674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5769,"bootTime":1724085033,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0819 11:06:42.938246   12674 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0819 11:06:42.946279   12674 out.go:177] * [functional-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0819 11:06:42.955184   12674 out.go:177]   - MINIKUBE_LOCATION=19468
I0819 11:06:42.955229   12674 notify.go:220] Checking for updates...
I0819 11:06:42.964161   12674 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
I0819 11:06:42.967205   12674 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0819 11:06:42.970128   12674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0819 11:06:42.973143   12674 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
I0819 11:06:42.976213   12674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0819 11:06:42.979449   12674 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:06:42.979502   12674 driver.go:392] Setting default libvirt URI to qemu:///system
I0819 11:06:42.984077   12674 out.go:177] * Using the qemu2 driver based on existing profile
I0819 11:06:42.990049   12674 start.go:297] selected driver: qemu2
I0819 11:06:42.990053   12674 start.go:901] validating driver "qemu2" against &{Name:functional-924000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 11:06:42.990140   12674 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0819 11:06:42.992612   12674 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0819 11:06:42.992650   12674 cni.go:84] Creating CNI manager for ""
I0819 11:06:42.992656   12674 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0819 11:06:42.992698   12674 start.go:340] cluster config:
{Name:functional-924000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 11:06:42.996383   12674 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0819 11:06:43.004129   12674 out.go:177] * Starting "functional-924000" primary control-plane node in "functional-924000" cluster
I0819 11:06:43.008135   12674 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0819 11:06:43.008151   12674 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0819 11:06:43.008162   12674 cache.go:56] Caching tarball of preloaded images
I0819 11:06:43.008236   12674 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0819 11:06:43.008241   12674 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0819 11:06:43.008310   12674 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/functional-924000/config.json ...
I0819 11:06:43.008805   12674 start.go:360] acquireMachinesLock for functional-924000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 11:06:43.008843   12674 start.go:364] duration metric: took 32.083µs to acquireMachinesLock for "functional-924000"
I0819 11:06:43.008851   12674 start.go:96] Skipping create...Using existing machine configuration
I0819 11:06:43.008856   12674 fix.go:54] fixHost starting: 
I0819 11:06:43.008988   12674 fix.go:112] recreateIfNeeded on functional-924000: state=Stopped err=<nil>
W0819 11:06:43.008995   12674 fix.go:138] unexpected machine state, will restart: <nil>
I0819 11:06:43.016081   12674 out.go:177] * Restarting existing qemu2 VM for "functional-924000" ...
I0819 11:06:43.020152   12674 qemu.go:418] Using hvf for hardware acceleration
I0819 11:06:43.020197   12674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6a:2e:cc:08:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/disk.qcow2
I0819 11:06:43.022339   12674 main.go:141] libmachine: STDOUT: 
I0819 11:06:43.022361   12674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 11:06:43.022390   12674 fix.go:56] duration metric: took 13.535584ms for fixHost
I0819 11:06:43.022393   12674 start.go:83] releasing machines lock for "functional-924000", held for 13.5475ms
W0819 11:06:43.022399   12674 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 11:06:43.022435   12674 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 11:06:43.022440   12674 start.go:729] Will try again in 5 seconds ...
I0819 11:06:48.024685   12674 start.go:360] acquireMachinesLock for functional-924000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 11:06:48.025113   12674 start.go:364] duration metric: took 353.917µs to acquireMachinesLock for "functional-924000"
I0819 11:06:48.025256   12674 start.go:96] Skipping create...Using existing machine configuration
I0819 11:06:48.025272   12674 fix.go:54] fixHost starting: 
I0819 11:06:48.025993   12674 fix.go:112] recreateIfNeeded on functional-924000: state=Stopped err=<nil>
W0819 11:06:48.026013   12674 fix.go:138] unexpected machine state, will restart: <nil>
I0819 11:06:48.031554   12674 out.go:177] * Restarting existing qemu2 VM for "functional-924000" ...
I0819 11:06:48.035592   12674 qemu.go:418] Using hvf for hardware acceleration
I0819 11:06:48.035954   12674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6a:2e:cc:08:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/disk.qcow2
I0819 11:06:48.045101   12674 main.go:141] libmachine: STDOUT: 
I0819 11:06:48.045144   12674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 11:06:48.045223   12674 fix.go:56] duration metric: took 19.954833ms for fixHost
I0819 11:06:48.045236   12674 start.go:83] releasing machines lock for "functional-924000", held for 20.106292ms
W0819 11:06:48.045438   12674 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-924000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 11:06:48.054555   12674 out.go:201] 
W0819 11:06:48.058540   12674 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 11:06:48.058606   12674 out.go:270] * 
W0819 11:06:48.061569   12674 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 11:06:48.067521   12674 out.go:201] 

* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
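
Every start attempt in the log above dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never boots and minikube exits with GUEST_PROVISION. A minimal triage sketch for the build host, assuming the /opt/socket_vmnet prefix shown in the log and a Homebrew-managed socket_vmnet service (the service name is an assumption):

# Is the daemon alive, and does its socket exist?
pgrep -fl socket_vmnet
ls -l /var/run/socket_vmnet

# Restart it if not (assumes socket_vmnet was installed via Homebrew)
sudo brew services restart socket_vmnet

# Then retry the profile, per the hint in the log itself
out/minikube-darwin-arm64 delete -p functional-924000
out/minikube-darwin-arm64 start -p functional-924000 --driver=qemu2

"Connection refused" with the socket file present usually points at a dead or wedged daemon; a missing socket file points at the service never having started.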

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd758745177/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
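The empty match follows directly from the failure above: since the guest VM never booted, "minikube logs" can only return host-side audit and start records, which never contain the guest's "Linux" kernel banner. A manual version of the same check, with a hypothetical output path standing in for the test's temp directory:

out/minikube-darwin-arm64 -p functional-924000 logs --file /tmp/functional-logs.txt
grep -c Linux /tmp/functional-logs.txt   # prints 0 (and exits 1) when the guest never came up

The captured log contents that the assertion saw follow.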
***
==> Audit <==
(table identical to the Audit table shown earlier in this report)

==> Last Start <==
Log file created at: 2024/08/19 11:06:42
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0819 11:06:42.920821   12674 out.go:345] Setting OutFile to fd 1 ...
I0819 11:06:42.920964   12674 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:06:42.920965   12674 out.go:358] Setting ErrFile to fd 2...
I0819 11:06:42.920967   12674 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:06:42.921109   12674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:06:42.922148   12674 out.go:352] Setting JSON to false
I0819 11:06:42.938182   12674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5769,"bootTime":1724085033,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0819 11:06:42.938246   12674 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0819 11:06:42.946279   12674 out.go:177] * [functional-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0819 11:06:42.955184   12674 out.go:177]   - MINIKUBE_LOCATION=19468
I0819 11:06:42.955229   12674 notify.go:220] Checking for updates...
I0819 11:06:42.964161   12674 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
I0819 11:06:42.967205   12674 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0819 11:06:42.970128   12674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0819 11:06:42.973143   12674 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
I0819 11:06:42.976213   12674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0819 11:06:42.979449   12674 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:06:42.979502   12674 driver.go:392] Setting default libvirt URI to qemu:///system
I0819 11:06:42.984077   12674 out.go:177] * Using the qemu2 driver based on existing profile
I0819 11:06:42.990049   12674 start.go:297] selected driver: qemu2
I0819 11:06:42.990053   12674 start.go:901] validating driver "qemu2" against &{Name:functional-924000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 11:06:42.990140   12674 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0819 11:06:42.992612   12674 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0819 11:06:42.992650   12674 cni.go:84] Creating CNI manager for ""
I0819 11:06:42.992656   12674 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0819 11:06:42.992698   12674 start.go:340] cluster config:
{Name:functional-924000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 11:06:42.996383   12674 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0819 11:06:43.004129   12674 out.go:177] * Starting "functional-924000" primary control-plane node in "functional-924000" cluster
I0819 11:06:43.008135   12674 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0819 11:06:43.008151   12674 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0819 11:06:43.008162   12674 cache.go:56] Caching tarball of preloaded images
I0819 11:06:43.008236   12674 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0819 11:06:43.008241   12674 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0819 11:06:43.008310   12674 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/functional-924000/config.json ...
I0819 11:06:43.008805   12674 start.go:360] acquireMachinesLock for functional-924000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 11:06:43.008843   12674 start.go:364] duration metric: took 32.083µs to acquireMachinesLock for "functional-924000"
I0819 11:06:43.008851   12674 start.go:96] Skipping create...Using existing machine configuration
I0819 11:06:43.008856   12674 fix.go:54] fixHost starting: 
I0819 11:06:43.008988   12674 fix.go:112] recreateIfNeeded on functional-924000: state=Stopped err=<nil>
W0819 11:06:43.008995   12674 fix.go:138] unexpected machine state, will restart: <nil>
I0819 11:06:43.016081   12674 out.go:177] * Restarting existing qemu2 VM for "functional-924000" ...
I0819 11:06:43.020152   12674 qemu.go:418] Using hvf for hardware acceleration
I0819 11:06:43.020197   12674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6a:2e:cc:08:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/disk.qcow2
I0819 11:06:43.022339   12674 main.go:141] libmachine: STDOUT: 
I0819 11:06:43.022361   12674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 11:06:43.022390   12674 fix.go:56] duration metric: took 13.535584ms for fixHost
I0819 11:06:43.022393   12674 start.go:83] releasing machines lock for "functional-924000", held for 13.5475ms
W0819 11:06:43.022399   12674 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 11:06:43.022435   12674 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 11:06:43.022440   12674 start.go:729] Will try again in 5 seconds ...
I0819 11:06:48.024685   12674 start.go:360] acquireMachinesLock for functional-924000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 11:06:48.025113   12674 start.go:364] duration metric: took 353.917µs to acquireMachinesLock for "functional-924000"
I0819 11:06:48.025256   12674 start.go:96] Skipping create...Using existing machine configuration
I0819 11:06:48.025272   12674 fix.go:54] fixHost starting: 
I0819 11:06:48.025993   12674 fix.go:112] recreateIfNeeded on functional-924000: state=Stopped err=<nil>
W0819 11:06:48.026013   12674 fix.go:138] unexpected machine state, will restart: <nil>
I0819 11:06:48.031554   12674 out.go:177] * Restarting existing qemu2 VM for "functional-924000" ...
I0819 11:06:48.035592   12674 qemu.go:418] Using hvf for hardware acceleration
I0819 11:06:48.035954   12674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6a:2e:cc:08:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/functional-924000/disk.qcow2
I0819 11:06:48.045101   12674 main.go:141] libmachine: STDOUT: 
I0819 11:06:48.045144   12674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 11:06:48.045223   12674 fix.go:56] duration metric: took 19.954833ms for fixHost
I0819 11:06:48.045236   12674 start.go:83] releasing machines lock for "functional-924000", held for 20.106292ms
W0819 11:06:48.045438   12674 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-924000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 11:06:48.054555   12674 out.go:201] 
W0819 11:06:48.058540   12674 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 11:06:48.058606   12674 out.go:270] * 
W0819 11:06:48.061569   12674 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 11:06:48.067521   12674 out.go:201] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
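Note on the root cause: every restart attempt above fails at the same step. socket_vmnet_client cannot reach the UNIX socket at /var/run/socket_vmnet, so qemu never receives the network file descriptor it expects on fd 3 and the VM never boots. A minimal Go probe of that socket (an illustrative sketch, not part of minikube or its test suite) reproduces the failing check:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the same UNIX socket that socket_vmnet_client fails on in the log.
	// A "connection refused" here means the socket_vmnet daemon is not
	// listening, which matches every "Failed to connect" line above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}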

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-924000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-924000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.117208ms)

** stderr ** 
	error: context "functional-924000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-924000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
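Note: this failure, like the kubectl failures below, is a secondary symptom. The cluster never started, so no "functional-924000" context was ever written to the kubeconfig. A sketch that confirms the missing context by shelling out to kubectl (assumes kubectl is on PATH; not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "kubectl config get-contexts <name>" exits non-zero when the named
	// context is absent from the active kubeconfig.
	out, err := exec.Command("kubectl", "config", "get-contexts", "functional-924000").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("context not found, matching the stderr above:", err)
	}
}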

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-924000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-924000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-924000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-924000 --alsologtostderr -v=1] stderr:
I0819 11:07:26.234666   12905 out.go:345] Setting OutFile to fd 1 ...
I0819 11:07:26.234993   12905 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:26.234996   12905 out.go:358] Setting ErrFile to fd 2...
I0819 11:07:26.234999   12905 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:26.235121   12905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:07:26.235367   12905 mustload.go:65] Loading cluster: functional-924000
I0819 11:07:26.235556   12905 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:07:26.239945   12905 out.go:177] * The control-plane node functional-924000 host is not running: state=Stopped
I0819 11:07:26.247739   12905 out.go:177]   To start a cluster, run: "minikube start -p functional-924000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (42.145584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 status: exit status 7 (72.096084ms)

-- stdout --
	functional-924000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-924000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.722542ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-924000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 status -o json: exit status 7 (29.88975ms)

-- stdout --
	{"Name":"functional-924000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-924000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (29.999459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.17s)
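Note: the -f argument to "minikube status" is a Go text/template rendered against the status object, and "kublet" in the test's format string is literal text rather than a field reference, which is why it appears verbatim in the output above. A self-contained sketch of how that template renders (the Status struct here is a stand-in, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in carrying the four fields the format string references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same format string the test passes via -f; "kublet" is a literal label.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	// Prints: host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
	_ = tmpl.Execute(os.Stdout, Status{"Stopped", "Stopped", "Stopped", "Stopped"})
}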

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-924000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-924000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.943416ms)

** stderr ** 
	error: context "functional-924000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-924000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-924000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-924000 describe po hello-node-connect: exit status 1 (26.002167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

** /stderr **
functional_test.go:1604: "kubectl --context functional-924000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-924000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-924000 logs -l app=hello-node-connect: exit status 1 (26.486833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

** /stderr **
functional_test.go:1610: "kubectl --context functional-924000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-924000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-924000 describe svc hello-node-connect: exit status 1 (26.362458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

** /stderr **
functional_test.go:1616: "kubectl --context functional-924000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (31.261791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-924000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (30.137167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "echo hello": exit status 83 (43.600125ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-924000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"\n"*. args "out/minikube-darwin-arm64 -p functional-924000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "cat /etc/hostname": exit status 83 (37.991458ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-924000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-924000"- but got *"* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"\n"*. args "out/minikube-darwin-arm64 -p functional-924000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (39.191834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (53.532333ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-924000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh -n functional-924000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh -n functional-924000 "sudo cat /home/docker/cp-test.txt": exit status 83 (39.949875ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-924000 ssh -n functional-924000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-924000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-924000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 cp functional-924000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2773368436/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 cp functional-924000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2773368436/001/cp-test.txt: exit status 83 (41.201042ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-924000 cp functional-924000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2773368436/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh -n functional-924000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh -n functional-924000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.913584ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-924000 ssh -n functional-924000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2773368436/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (47.75275ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-924000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh -n functional-924000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh -n functional-924000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (52.8215ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-924000 ssh -n functional-924000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-924000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-924000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
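Note: the (-want +got) blocks above use go-cmp diff notation. The test expected the copied file's contents ("Test file for checking file cp process") but every command printed the "host is not running" advice text instead. The notation can be reproduced with github.com/google/go-cmp (a sketch, not necessarily the exact helper the suite uses):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-924000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-924000\"\n"
	// cmp.Diff emits the (-want +got) strings.Join notation seen in the log.
	fmt.Println(cmp.Diff(want, got))
}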

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/12317/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/test/nested/copy/12317/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/test/nested/copy/12317/hosts": exit status 83 (41.428834ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/test/nested/copy/12317/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-924000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-924000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (29.997125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/12317.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/ssl/certs/12317.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/ssl/certs/12317.pem": exit status 83 (44.645083ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/12317.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-924000 ssh \"sudo cat /etc/ssl/certs/12317.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/12317.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-924000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-924000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/12317.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /usr/share/ca-certificates/12317.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /usr/share/ca-certificates/12317.pem": exit status 83 (41.587583ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/12317.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-924000 ssh \"sudo cat /usr/share/ca-certificates/12317.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/12317.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-924000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-924000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (40.639042ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-924000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-924000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-924000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/123172.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/ssl/certs/123172.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/ssl/certs/123172.pem": exit status 83 (39.75475ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/123172.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-924000 ssh \"sudo cat /etc/ssl/certs/123172.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/123172.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-924000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-924000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/123172.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /usr/share/ca-certificates/123172.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /usr/share/ca-certificates/123172.pem": exit status 83 (39.023834ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/123172.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-924000 ssh \"sudo cat /usr/share/ca-certificates/123172.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/123172.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-924000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-924000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (40.443541ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-924000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-924000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-924000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (29.311916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)
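Note: paths such as /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 look like OpenSSL subject-hash names, the convention used to file CA certificates in the system trust directory. A sketch of how such a name can be derived from a PEM (assumes openssl on PATH and the PEM in the working directory; illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Print the subject hash openssl files this certificate under;
	// /etc/ssl/certs/<hash>.0 is the style of name the test checks.
	out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash", "-in", "minikube_test.pem").Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}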

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-924000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-924000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.324583ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

                                                
                                                
** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-924000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-924000 -n functional-924000: exit status 7 (30.884917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-924000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
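The label probe itself is a single kubectl invocation; a minimal sketch of the same check in Go (kubectl on PATH assumed; it fails in this run only because the context was never created):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same go-template the test uses: print every label key on the first node.
		out, err := exec.Command("kubectl", "--context", "functional-924000",
			"get", "nodes", "--output=go-template",
			"--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}").Output()
		if err != nil {
			// "context was not found" surfaces as a non-zero exit, as above.
			fmt.Println("kubectl failed:", err)
			return
		}
		for _, want := range []string{"minikube.k8s.io/commit", "minikube.k8s.io/version",
			"minikube.k8s.io/updated_at", "minikube.k8s.io/name", "minikube.k8s.io/primary"} {
			fmt.Printf("%-30s present: %v\n", want, strings.Contains(string(out), want))
		}
	}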

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo systemctl is-active crio": exit status 83 (40.484625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
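The intent here: with ContainerRuntime=docker, crio must report as inactive. Note that `systemctl is-active` exits non-zero for any state other than "active", so the check has to read the printed state rather than the exit code. A sketch of that check in Go, using the same command as above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ignore the exit code: is-active deliberately fails for "inactive".
		out, _ := exec.Command("minikube", "-p", "functional-924000",
			"ssh", "sudo systemctl is-active crio").CombinedOutput()
		state := strings.TrimSpace(string(out))
		if state == "inactive" {
			fmt.Println("ok: crio is disabled")
			return
		}
		// On a stopped host the output is the "state=Stopped" hint instead,
		// which is what trips the assertion above.
		fmt.Printf("unexpected state: %q\n", state)
	}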

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-924000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-924000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0819 11:06:48.711206   12728 out.go:345] Setting OutFile to fd 1 ...
I0819 11:06:48.711317   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:06:48.711320   12728 out.go:358] Setting ErrFile to fd 2...
I0819 11:06:48.711322   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:06:48.711479   12728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:06:48.711685   12728 mustload.go:65] Loading cluster: functional-924000
I0819 11:06:48.711914   12728 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:06:48.715750   12728 out.go:177] * The control-plane node functional-924000 host is not running: state=Stopped
I0819 11:06:48.723903   12728 out.go:177]   To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
stdout: * The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-924000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 12729: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-924000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-924000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-924000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-924000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-924000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-924000": client config: context "functional-924000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)
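The "client config" error comes from resolving a kubeconfig context that was never written, because the cluster never started. A minimal client-go sketch of that resolution (requires the k8s.io/client-go module; it fails with the same message when the context is absent):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Resolve the named context from the default kubeconfig locations.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "functional-924000"},
		).ClientConfig()
		if err != nil {
			// Reports: context "functional-924000" does not exist
			fmt.Println("client config:", err)
			return
		}
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			fmt.Println("client:", err)
		}
	}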

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (91.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-924000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-924000 get svc nginx-svc: exit status 1 (69.857375ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-924000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-924000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (91.07s)
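The telling detail is the target: Get "http:" means the URL was built from a LoadBalancer ingress IP that was never assigned, leaving no host at all. A sketch of guarding against that before dialing (fetch is a hypothetical helper, not part of the test suite):

	package main

	import (
		"fmt"
		"net/http"
		"net/url"
	)

	// fetch refuses to dial when the service has no ingress IP yet.
	func fetch(ip string) error {
		u := &url.URL{Scheme: "http", Host: ip}
		if u.Host == "" {
			return fmt.Errorf("no ingress IP assigned; tunnel not established")
		}
		resp, err := http.Get(u.String())
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		return nil
	}

	func main() {
		// An empty IP reproduces the "no Host in request URL" failure mode above.
		fmt.Println(fetch(""))
	}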

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-924000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-924000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.1875ms)

                                                
                                                
** stderr ** 
	error: context "functional-924000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-924000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 service list: exit status 83 (42.11125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-924000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 service list -o json: exit status 83 (41.770417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-924000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 service --namespace=default --https --url hello-node: exit status 83 (40.881292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-924000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 service hello-node --url --format={{.IP}}: exit status 83 (43.984708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-924000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 service hello-node --url: exit status 83 (50.786875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-924000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:1569: failed to parse "* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"": parse "* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
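The final parse error is expected for this input: the string being parsed is the two-line "host is not running" hint, and net/url rejects any control character, including the embedded newline. A two-line illustration:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// The newline is a control character, so the whole string is rejected.
		_, err := url.Parse("* The control-plane node ... state=Stopped\n  To start a cluster ...")
		fmt.Println(err) // net/url: invalid control character in URL
	}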

                                                
                                    
TestFunctional/parallel/Version/components (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 version -o=json --components: exit status 83 (49.729208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-924000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-924000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-924000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-924000 image ls --format short --alsologtostderr:
I0819 11:07:31.125870   13040 out.go:345] Setting OutFile to fd 1 ...
I0819 11:07:31.126003   13040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:31.126007   13040 out.go:358] Setting ErrFile to fd 2...
I0819 11:07:31.126009   13040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:31.126128   13040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:07:31.126519   13040 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:07:31.126592   13040 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)
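The assertion here (and in the table/json/yaml variants below) is a substring scan over the `image ls` output; with the host stopped the listing is empty, so every expected image "is not there". A sketch of the scan, using the same command:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-924000",
			"image", "ls", "--format", "short").Output()
		if err != nil {
			fmt.Println("image ls failed:", err)
			return
		}
		// An empty listing, as in this run, fails the check.
		if !strings.Contains(string(out), "registry.k8s.io/pause") {
			fmt.Println("registry.k8s.io/pause not listed")
		}
	}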

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-924000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-924000 image ls --format table --alsologtostderr:
I0819 11:07:31.194915   13044 out.go:345] Setting OutFile to fd 1 ...
I0819 11:07:31.195057   13044 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:31.195061   13044 out.go:358] Setting ErrFile to fd 2...
I0819 11:07:31.195063   13044 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:31.195198   13044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:07:31.195604   13044 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:07:31.195666   13044 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-924000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-924000 image ls --format json --alsologtostderr:
I0819 11:07:31.161163   13042 out.go:345] Setting OutFile to fd 1 ...
I0819 11:07:31.161300   13042 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:31.161303   13042 out.go:358] Setting ErrFile to fd 2...
I0819 11:07:31.161306   13042 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:31.161424   13042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:07:31.161822   13042 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:07:31.161881   13042 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-924000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-924000 image ls --format yaml --alsologtostderr:
I0819 11:07:31.091296   13038 out.go:345] Setting OutFile to fd 1 ...
I0819 11:07:31.091450   13038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:31.091453   13038 out.go:358] Setting ErrFile to fd 2...
I0819 11:07:31.091456   13038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:31.091587   13038 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:07:31.092004   13038 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:07:31.092067   13038 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh pgrep buildkitd: exit status 83 (40.145584ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image build -t localhost/my-image:functional-924000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-924000 image build -t localhost/my-image:functional-924000 testdata/build --alsologtostderr:
I0819 11:07:31.270001   13048 out.go:345] Setting OutFile to fd 1 ...
I0819 11:07:31.270788   13048 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:31.270792   13048 out.go:358] Setting ErrFile to fd 2...
I0819 11:07:31.270795   13048 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:07:31.270920   13048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:07:31.271352   13048 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:07:31.271773   13048 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:07:31.272002   13048 build_images.go:133] succeeded building to: 
I0819 11:07:31.272005   13048 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image ls
functional_test.go:446: expected "localhost/my-image:functional-924000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image load --daemon kicbase/echo-server:functional-924000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-924000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image load --daemon kicbase/echo-server:functional-924000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-924000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-924000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image load --daemon kicbase/echo-server:functional-924000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-924000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image save kicbase/echo-server:functional-924000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
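`image save` exited without writing anything, so the follow-up existence check fails. A sketch of the save-then-stat sequence (image name and path taken from the report above):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		tar := "/Users/jenkins/workspace/echo-server-save.tar"
		// Ask minikube to export the image to a tarball...
		if out, err := exec.Command("minikube", "-p", "functional-924000", "image", "save",
			"kicbase/echo-server:functional-924000", tar).CombinedOutput(); err != nil {
			fmt.Printf("image save failed: %v\n%s", err, out)
		}
		// ...then verify the file actually landed on disk.
		if _, err := os.Stat(tar); err != nil {
			fmt.Println("tarball was not written:", err)
		}
	}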

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-924000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-924000 docker-env) && out/minikube-darwin-arm64 status -p functional-924000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-924000 docker-env) && out/minikube-darwin-arm64 status -p functional-924000": exit status 1 (43.416625ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)
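docker-env prints shell export statements meant to be eval'd, and the test chains that eval with a status call in a single bash -c. When docker-env exits 83, the hint text it prints is not valid shell, so the whole chain fails. A sketch of the same chain driven from Go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same shape as the failing test: eval the exports, then re-check status.
		script := `eval $(minikube -p functional-924000 docker-env) && minikube status -p functional-924000`
		if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
			fmt.Printf("docker-env chain failed: %v\n%s", err, out)
		}
	}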

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 update-context --alsologtostderr -v=2: exit status 83 (43.749958ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:07:31.339988   13052 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:07:31.340743   13052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:07:31.340747   13052 out.go:358] Setting ErrFile to fd 2...
	I0819 11:07:31.340749   13052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:07:31.340871   13052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:07:31.341065   13052 mustload.go:65] Loading cluster: functional-924000
	I0819 11:07:31.341245   13052 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:07:31.346415   13052 out.go:177] * The control-plane node functional-924000 host is not running: state=Stopped
	I0819 11:07:31.350448   13052 out.go:177]   To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-924000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 update-context --alsologtostderr -v=2: exit status 83 (42.698625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:07:31.431100   13056 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:07:31.431217   13056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:07:31.431220   13056 out.go:358] Setting ErrFile to fd 2...
	I0819 11:07:31.431222   13056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:07:31.431340   13056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:07:31.431534   13056 mustload.go:65] Loading cluster: functional-924000
	I0819 11:07:31.431722   13056 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:07:31.436486   13056 out.go:177] * The control-plane node functional-924000 host is not running: state=Stopped
	I0819 11:07:31.440415   13056 out.go:177]   To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-924000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 update-context --alsologtostderr -v=2: exit status 83 (45.513042ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:07:31.384835   13054 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:07:31.384969   13054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:07:31.384973   13054 out.go:358] Setting ErrFile to fd 2...
	I0819 11:07:31.384975   13054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:07:31.385088   13054 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:07:31.385292   13054 mustload.go:65] Loading cluster: functional-924000
	I0819 11:07:31.385477   13054 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:07:31.390474   13054 out.go:177] * The control-plane node functional-924000 host is not running: state=Stopped
	I0819 11:07:31.398428   13054 out.go:177]   To start a cluster, run: "minikube start -p functional-924000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-924000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-924000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-924000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.029766834s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
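dig is aimed directly at the cluster DNS ClusterIP (10.96.0.10), which is only reachable from the host while `minikube tunnel` routes the service CIDR; the scutil dump confirms the cluster.local resolver is installed but the server never answers. The same probe in Go, with a resolver pinned to that server (it times out identically when the tunnel is down):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Send lookups to the in-cluster DNS service instead of the system
		// resolvers; this only works while 10.96.0.0/12 is routed to the host.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err) // matches the dig timeout above
			return
		}
		fmt.Println(addrs)
	}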

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.82s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (9.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-046000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-046000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.854436959s)

                                                
                                                
-- stdout --
	* [ha-046000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-046000" primary control-plane node in "ha-046000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-046000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:09:23.156450   13169 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:09:23.156601   13169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:09:23.156604   13169 out.go:358] Setting ErrFile to fd 2...
	I0819 11:09:23.156607   13169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:09:23.156742   13169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:09:23.157832   13169 out.go:352] Setting JSON to false
	I0819 11:09:23.173922   13169 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5930,"bootTime":1724085033,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:09:23.174004   13169 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:09:23.181248   13169 out.go:177] * [ha-046000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:09:23.189154   13169 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:09:23.189196   13169 notify.go:220] Checking for updates...
	I0819 11:09:23.196200   13169 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:09:23.199167   13169 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:09:23.202187   13169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:09:23.205179   13169 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:09:23.208120   13169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:09:23.211351   13169 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:09:23.215336   13169 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:09:23.222173   13169 start.go:297] selected driver: qemu2
	I0819 11:09:23.222180   13169 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:09:23.222187   13169 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:09:23.224496   13169 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:09:23.227222   13169 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:09:23.230259   13169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:09:23.230291   13169 cni.go:84] Creating CNI manager for ""
	I0819 11:09:23.230295   13169 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 11:09:23.230299   13169 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:09:23.230344   13169 start.go:340] cluster config:
	{Name:ha-046000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-046000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:09:23.233822   13169 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:09:23.241208   13169 out.go:177] * Starting "ha-046000" primary control-plane node in "ha-046000" cluster
	I0819 11:09:23.245163   13169 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:09:23.245178   13169 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:09:23.245190   13169 cache.go:56] Caching tarball of preloaded images
	I0819 11:09:23.245256   13169 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:09:23.245262   13169 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:09:23.245455   13169 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/ha-046000/config.json ...
	I0819 11:09:23.245466   13169 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/ha-046000/config.json: {Name:mk94ee602aaea3584152c3e1e713daeda98faccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:09:23.245794   13169 start.go:360] acquireMachinesLock for ha-046000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:09:23.245826   13169 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "ha-046000"
	I0819 11:09:23.245837   13169 start.go:93] Provisioning new machine with config: &{Name:ha-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-046000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:09:23.245866   13169 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:09:23.250014   13169 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:09:23.266564   13169 start.go:159] libmachine.API.Create for "ha-046000" (driver="qemu2")
	I0819 11:09:23.266589   13169 client.go:168] LocalClient.Create starting
	I0819 11:09:23.266645   13169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:09:23.266679   13169 main.go:141] libmachine: Decoding PEM data...
	I0819 11:09:23.266688   13169 main.go:141] libmachine: Parsing certificate...
	I0819 11:09:23.266723   13169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:09:23.266745   13169 main.go:141] libmachine: Decoding PEM data...
	I0819 11:09:23.266753   13169 main.go:141] libmachine: Parsing certificate...
	I0819 11:09:23.267194   13169 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:09:23.423841   13169 main.go:141] libmachine: Creating SSH key...
	I0819 11:09:23.485258   13169 main.go:141] libmachine: Creating Disk image...
	I0819 11:09:23.485264   13169 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:09:23.485464   13169 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2
	I0819 11:09:23.494616   13169 main.go:141] libmachine: STDOUT: 
	I0819 11:09:23.494637   13169 main.go:141] libmachine: STDERR: 
	I0819 11:09:23.494688   13169 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2 +20000M
	I0819 11:09:23.502690   13169 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:09:23.502705   13169 main.go:141] libmachine: STDERR: 
	I0819 11:09:23.502724   13169 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2
	I0819 11:09:23.502730   13169 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:09:23.502738   13169 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:09:23.502761   13169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:8a:2e:bf:7f:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2
	I0819 11:09:23.504313   13169 main.go:141] libmachine: STDOUT: 
	I0819 11:09:23.504328   13169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:09:23.504346   13169 client.go:171] duration metric: took 237.753875ms to LocalClient.Create
	I0819 11:09:25.506520   13169 start.go:128] duration metric: took 2.260639666s to createHost
	I0819 11:09:25.506608   13169 start.go:83] releasing machines lock for "ha-046000", held for 2.260783708s
	W0819 11:09:25.506706   13169 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:09:25.520921   13169 out.go:177] * Deleting "ha-046000" in qemu2 ...
	W0819 11:09:25.553156   13169 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:09:25.553240   13169 start.go:729] Will try again in 5 seconds ...
	I0819 11:09:30.555375   13169 start.go:360] acquireMachinesLock for ha-046000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:09:30.555773   13169 start.go:364] duration metric: took 317.917µs to acquireMachinesLock for "ha-046000"
	I0819 11:09:30.555887   13169 start.go:93] Provisioning new machine with config: &{Name:ha-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-046000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:09:30.556212   13169 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:09:30.568883   13169 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:09:30.618987   13169 start.go:159] libmachine.API.Create for "ha-046000" (driver="qemu2")
	I0819 11:09:30.619048   13169 client.go:168] LocalClient.Create starting
	I0819 11:09:30.619209   13169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:09:30.619285   13169 main.go:141] libmachine: Decoding PEM data...
	I0819 11:09:30.619306   13169 main.go:141] libmachine: Parsing certificate...
	I0819 11:09:30.619369   13169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:09:30.619414   13169 main.go:141] libmachine: Decoding PEM data...
	I0819 11:09:30.619425   13169 main.go:141] libmachine: Parsing certificate...
	I0819 11:09:30.620085   13169 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:09:30.783762   13169 main.go:141] libmachine: Creating SSH key...
	I0819 11:09:30.916391   13169 main.go:141] libmachine: Creating Disk image...
	I0819 11:09:30.916398   13169 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:09:30.916644   13169 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2
	I0819 11:09:30.926170   13169 main.go:141] libmachine: STDOUT: 
	I0819 11:09:30.926187   13169 main.go:141] libmachine: STDERR: 
	I0819 11:09:30.926236   13169 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2 +20000M
	I0819 11:09:30.934230   13169 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:09:30.934250   13169 main.go:141] libmachine: STDERR: 
	I0819 11:09:30.934260   13169 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2
	I0819 11:09:30.934264   13169 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:09:30.934274   13169 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:09:30.934301   13169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:5f:30:7a:34:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2
	I0819 11:09:30.935974   13169 main.go:141] libmachine: STDOUT: 
	I0819 11:09:30.935989   13169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:09:30.936001   13169 client.go:171] duration metric: took 316.937ms to LocalClient.Create
	I0819 11:09:32.938163   13169 start.go:128] duration metric: took 2.381916875s to createHost
	I0819 11:09:32.938232   13169 start.go:83] releasing machines lock for "ha-046000", held for 2.382446667s
	W0819 11:09:32.938682   13169 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-046000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-046000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:09:32.954296   13169 out.go:201] 
	W0819 11:09:32.957375   13169 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:09:32.957434   13169 out.go:270] * 
	* 
	W0819 11:09:32.959821   13169 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:09:32.968250   13169 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-046000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (67.802792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.92s)
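
The failure above never reaches Kubernetes: on both createHost attempts, socket_vmnet_client exits with "Connection refused" before QEMU starts, so no VM ever boots and every later step in this serial suite inherits a stopped profile. A minimal standalone Go sketch of that connectivity check (a hypothetical diagnostic, not minikube code; it assumes the default /var/run/socket_vmnet path shown in the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSocketVMnet dials the unix socket that socket_vmnet_client needs
// before it can hand QEMU a connected file descriptor. If the
// socket_vmnet daemon is not running (or the socket file is stale),
// the dial fails with "connection refused", matching the log above.
func probeSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	fmt.Println("socket_vmnet is accepting connections")
}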

TestMultiControlPlane/serial/DeployApp (82.52s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (61.10375ms)

** stderr ** 
	error: cluster "ha-046000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- rollout status deployment/busybox: exit status 1 (58.320083ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.934042ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.471667ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.312875ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.076625ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.135083ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.983167ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.941542ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.526917ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.696959ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.238333ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.236375ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.437708ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.672833ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.535875ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (30.40675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (82.52s)
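
The ten identical get-pods failures above are one polling loop, not ten separate test steps: the harness reruns the jsonpath query on an interval until pod IPs appear or a deadline passes, then gives up with "failed to resolve pod IPs". A rough Go sketch of that retry pattern (a simplified stand-in, not the actual ha_test.go helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs shells out the same way the log above does and returns the
// space-separated pod IPs, or an error while the cluster is unreachable.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(80 * time.Second)
	for time.Now().Before(deadline) {
		if ips, err := podIPs("ha-046000"); err == nil && ips != "" {
			fmt.Println("pod IPs:", ips)
			return
		}
		// "may be temporary": back off and retry, as the repeated
		// ha_test.go:140 entries above show.
		time.Sleep(8 * time.Second)
	}
	fmt.Println("failed to resolve pod IPs before the deadline")
}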

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-046000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.518917ms)

** stderr ** 
	error: no server found for cluster "ha-046000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (30.003708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-046000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-046000 -v=7 --alsologtostderr: exit status 83 (44.752334ms)

-- stdout --
	* The control-plane node ha-046000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-046000"

-- /stdout --
** stderr ** 
	I0819 11:10:55.688467   13314 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:10:55.689069   13314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:55.689073   13314 out.go:358] Setting ErrFile to fd 2...
	I0819 11:10:55.689076   13314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:55.689222   13314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:10:55.689449   13314 mustload.go:65] Loading cluster: ha-046000
	I0819 11:10:55.689620   13314 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:10:55.693619   13314 out.go:177] * The control-plane node ha-046000 host is not running: state=Stopped
	I0819 11:10:55.697475   13314 out.go:177]   To start a cluster, run: "minikube start -p ha-046000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-046000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (30.420583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-046000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-046000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.699625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-046000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-046000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-046000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (31.087334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
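
Two distinct errors stack up in NodeLabels: kubectl exits non-zero because the context is missing, and the test then feeds the empty captured stdout to the JSON decoder, which is what yields "unexpected end of JSON input" at ha_test.go:264. A tiny Go sketch of that second failure:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl produced no stdout, so the test effectively decoded "".
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}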

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-046000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-046000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-046000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-046000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-046000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-046000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-046000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-046000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (30.371208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
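
HAppyAfterClusterStart never contacts the cluster; it decodes "profile list --output json" and asserts on the node count and status recorded in the profile. Because the start failed, the saved config still holds the single seed node with status "Stopped" instead of four nodes and "HAppy". A compact Go sketch of that kind of check (stand-in structs for illustration; the real types live in minikube's config package):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just the fields the assertion needs.
type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct{ ControlPlane bool }
		}
	} `json:"valid"`
}

func main() {
	// Abbreviated from the JSON captured above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-046000","Status":"Stopped",
		"Config":{"Nodes":[{"ControlPlane":true}]}}]}`)

	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	p := pl.Valid[0]
	fmt.Println(p.Name, p.Status, "nodes:", len(p.Config.Nodes)) // ha-046000 Stopped nodes: 1
}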

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status --output json -v=7 --alsologtostderr: exit status 7 (30.178708ms)

-- stdout --
	{"Name":"ha-046000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0819 11:10:55.898642   13326 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:10:55.898790   13326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:55.898793   13326 out.go:358] Setting ErrFile to fd 2...
	I0819 11:10:55.898795   13326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:55.898924   13326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:10:55.899036   13326 out.go:352] Setting JSON to true
	I0819 11:10:55.899046   13326 mustload.go:65] Loading cluster: ha-046000
	I0819 11:10:55.899109   13326 notify.go:220] Checking for updates...
	I0819 11:10:55.899264   13326 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:10:55.899270   13326 status.go:255] checking status of ha-046000 ...
	I0819 11:10:55.899487   13326 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:10:55.899491   13326 status.go:343] host is not running, skipping remaining checks
	I0819 11:10:55.899493   13326 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-046000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (30.341625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
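
The CopyFile failure is a shape mismatch rather than another connectivity error: with only the one stopped node, "status --output json" prints a single JSON object, while the test decodes into a slice ([]cmd.Status), so encoding/json refuses it. A minimal Go reproduction with a stand-in Status type:

package main

import (
	"encoding/json"
	"fmt"
)

// Status stands in for minikube's cmd.Status, with just the fields
// visible in the stdout captured above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	out := []byte(`{"Name":"ha-046000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status
	fmt.Println(json.Unmarshal(out, &many))
	// json: cannot unmarshal object into Go value of type []main.Status

	var one Status
	fmt.Println(json.Unmarshal(out, &one), one.Host) // <nil> Stopped
}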

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 node stop m02 -v=7 --alsologtostderr: exit status 85 (48.851916ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0819 11:10:55.959930   13330 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:10:55.960272   13330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:55.960277   13330 out.go:358] Setting ErrFile to fd 2...
	I0819 11:10:55.960280   13330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:55.960454   13330 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:10:55.960699   13330 mustload.go:65] Loading cluster: ha-046000
	I0819 11:10:55.960902   13330 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:10:55.965518   13330 out.go:201] 
	W0819 11:10:55.969449   13330 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0819 11:10:55.969454   13330 out.go:270] * 
	* 
	W0819 11:10:55.971605   13330 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:10:55.975503   13330 out.go:201] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-046000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (30.698375ms)

-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:10:56.009290   13332 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:10:56.009425   13332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:56.009429   13332 out.go:358] Setting ErrFile to fd 2...
	I0819 11:10:56.009431   13332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:56.009570   13332 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:10:56.009694   13332 out.go:352] Setting JSON to false
	I0819 11:10:56.009705   13332 mustload.go:65] Loading cluster: ha-046000
	I0819 11:10:56.009760   13332 notify.go:220] Checking for updates...
	I0819 11:10:56.009902   13332 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:10:56.009908   13332 status.go:255] checking status of ha-046000 ...
	I0819 11:10:56.010120   13332 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:10:56.010123   13332 status.go:343] host is not running, skipping remaining checks
	I0819 11:10:56.010126   13332 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr": ha-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr": ha-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr": ha-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr": ha-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (30.63425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-046000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-046000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-046000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-046000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
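
The quoted blob in the ha_test.go:413 failure above is ordinary "minikube profile list --output json" output. A minimal sketch of pulling the per-profile Status out of it, assuming only the field names visible in the blob ("valid", "Name", "Status"):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// profileList models just the fields this check needs.
	type profileList struct {
		Valid []struct {
			Name   string
			Status string
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			// The test wants "Degraded" here; this run reports "Stopped".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}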
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (34.361542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (53.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.057084ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:10:56.151064   13341 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:10:56.151461   13341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:56.151465   13341 out.go:358] Setting ErrFile to fd 2...
	I0819 11:10:56.151467   13341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:56.151624   13341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:10:56.151839   13341 mustload.go:65] Loading cluster: ha-046000
	I0819 11:10:56.152013   13341 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:10:56.156461   13341 out.go:201] 
	W0819 11:10:56.160480   13341 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0819 11:10:56.160485   13341 out.go:270] * 
	* 
	W0819 11:10:56.162499   13341 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:10:56.165436   13341 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:422: I0819 11:10:56.151064   13341 out.go:345] Setting OutFile to fd 1 ...
I0819 11:10:56.151461   13341 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:10:56.151465   13341 out.go:358] Setting ErrFile to fd 2...
I0819 11:10:56.151467   13341 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:10:56.151624   13341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:10:56.151839   13341 mustload.go:65] Loading cluster: ha-046000
I0819 11:10:56.152013   13341 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:10:56.156461   13341 out.go:201] 
W0819 11:10:56.160480   13341 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0819 11:10:56.160485   13341 out.go:270] * 
* 
W0819 11:10:56.162499   13341 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 11:10:56.165436   13341 out.go:201] 

                                                
                                                
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-046000 node start m02 -v=7 --alsologtostderr": exit status 85
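
The exit status 85 above is a GUEST_NODE_RETRIEVE error: the earlier StartCluster failure means this profile only ever had its primary node, so there is no "m02" to start. For reference, secondary nodes in a multi-node profile are addressed by "m02"-style suffixes, as in the command above; a sketch of that naming convention (nodeSuffix below is a hypothetical helper, not minikube's actual function):

	package main

	import "fmt"

	// nodeSuffix mirrors the "m02"/"m03" names used by "minikube node start";
	// the primary node is addressed via the profile name itself.
	func nodeSuffix(profile string, i int) string {
		if i == 1 {
			return profile
		}
		return fmt.Sprintf("m%02d", i)
	}

	func main() {
		for i := 1; i <= 3; i++ {
			fmt.Printf("node %d -> %s\n", i, nodeSuffix("ha-046000", i))
		}
	}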
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (30.73075ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:10:56.198240   13343 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:10:56.198411   13343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:56.198415   13343 out.go:358] Setting ErrFile to fd 2...
	I0819 11:10:56.198417   13343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:56.198543   13343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:10:56.198679   13343 out.go:352] Setting JSON to false
	I0819 11:10:56.198691   13343 mustload.go:65] Loading cluster: ha-046000
	I0819 11:10:56.198782   13343 notify.go:220] Checking for updates...
	I0819 11:10:56.198880   13343 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:10:56.198891   13343 status.go:255] checking status of ha-046000 ...
	I0819 11:10:56.199107   13343 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:10:56.199112   13343 status.go:343] host is not running, skipping remaining checks
	I0819 11:10:56.199114   13343 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (74.79ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:10:57.300797   13347 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:10:57.301000   13347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:57.301004   13347 out.go:358] Setting ErrFile to fd 2...
	I0819 11:10:57.301008   13347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:57.301190   13347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:10:57.301359   13347 out.go:352] Setting JSON to false
	I0819 11:10:57.301374   13347 mustload.go:65] Loading cluster: ha-046000
	I0819 11:10:57.301414   13347 notify.go:220] Checking for updates...
	I0819 11:10:57.301651   13347 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:10:57.301661   13347 status.go:255] checking status of ha-046000 ...
	I0819 11:10:57.301921   13347 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:10:57.301926   13347 status.go:343] host is not running, skipping remaining checks
	I0819 11:10:57.301929   13347 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (75.183625ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:10:58.989037   13349 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:10:58.989520   13349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:58.989541   13349 out.go:358] Setting ErrFile to fd 2...
	I0819 11:10:58.989549   13349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:58.990154   13349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:10:58.990474   13349 out.go:352] Setting JSON to false
	I0819 11:10:58.990498   13349 mustload.go:65] Loading cluster: ha-046000
	I0819 11:10:58.990523   13349 notify.go:220] Checking for updates...
	I0819 11:10:58.990757   13349 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:10:58.990772   13349 status.go:255] checking status of ha-046000 ...
	I0819 11:10:58.991040   13349 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:10:58.991046   13349 status.go:343] host is not running, skipping remaining checks
	I0819 11:10:58.991049   13349 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (75.854875ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:11:02.028289   13355 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:11:02.028514   13355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:02.028519   13355 out.go:358] Setting ErrFile to fd 2...
	I0819 11:11:02.028522   13355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:02.028687   13355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:11:02.028879   13355 out.go:352] Setting JSON to false
	I0819 11:11:02.028894   13355 mustload.go:65] Loading cluster: ha-046000
	I0819 11:11:02.028938   13355 notify.go:220] Checking for updates...
	I0819 11:11:02.029149   13355 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:11:02.029157   13355 status.go:255] checking status of ha-046000 ...
	I0819 11:11:02.029451   13355 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:11:02.029456   13355 status.go:343] host is not running, skipping remaining checks
	I0819 11:11:02.029459   13355 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (73.890791ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:11:04.682763   13359 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:11:04.682956   13359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:04.682967   13359 out.go:358] Setting ErrFile to fd 2...
	I0819 11:11:04.682970   13359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:04.683133   13359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:11:04.683304   13359 out.go:352] Setting JSON to false
	I0819 11:11:04.683318   13359 mustload.go:65] Loading cluster: ha-046000
	I0819 11:11:04.683371   13359 notify.go:220] Checking for updates...
	I0819 11:11:04.683569   13359 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:11:04.683577   13359 status.go:255] checking status of ha-046000 ...
	I0819 11:11:04.683855   13359 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:11:04.683860   13359 status.go:343] host is not running, skipping remaining checks
	I0819 11:11:04.683863   13359 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (72.989583ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:11:08.357441   13365 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:11:08.357852   13365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:08.357858   13365 out.go:358] Setting ErrFile to fd 2...
	I0819 11:11:08.357861   13365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:08.358093   13365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:11:08.358294   13365 out.go:352] Setting JSON to false
	I0819 11:11:08.358307   13365 mustload.go:65] Loading cluster: ha-046000
	I0819 11:11:08.358557   13365 notify.go:220] Checking for updates...
	I0819 11:11:08.358920   13365 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:11:08.358937   13365 status.go:255] checking status of ha-046000 ...
	I0819 11:11:08.359214   13365 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:11:08.359220   13365 status.go:343] host is not running, skipping remaining checks
	I0819 11:11:08.359223   13365 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (74.635292ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:11:18.931944   13376 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:11:18.932164   13376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:18.932169   13376 out.go:358] Setting ErrFile to fd 2...
	I0819 11:11:18.932172   13376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:18.932339   13376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:11:18.932497   13376 out.go:352] Setting JSON to false
	I0819 11:11:18.932510   13376 mustload.go:65] Loading cluster: ha-046000
	I0819 11:11:18.932559   13376 notify.go:220] Checking for updates...
	I0819 11:11:18.932811   13376 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:11:18.932822   13376 status.go:255] checking status of ha-046000 ...
	I0819 11:11:18.933112   13376 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:11:18.933117   13376 status.go:343] host is not running, skipping remaining checks
	I0819 11:11:18.933119   13376 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (71.119709ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:11:28.123065   13394 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:11:28.123284   13394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:28.123289   13394 out.go:358] Setting ErrFile to fd 2...
	I0819 11:11:28.123293   13394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:28.123471   13394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:11:28.123650   13394 out.go:352] Setting JSON to false
	I0819 11:11:28.123667   13394 mustload.go:65] Loading cluster: ha-046000
	I0819 11:11:28.123703   13394 notify.go:220] Checking for updates...
	I0819 11:11:28.123969   13394 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:11:28.123978   13394 status.go:255] checking status of ha-046000 ...
	I0819 11:11:28.124353   13394 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:11:28.124359   13394 status.go:343] host is not running, skipping remaining checks
	I0819 11:11:28.124362   13394 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (71.404292ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:11:49.483575   13414 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:11:49.483786   13414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:49.483790   13414 out.go:358] Setting ErrFile to fd 2...
	I0819 11:11:49.483794   13414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:49.483959   13414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:11:49.484110   13414 out.go:352] Setting JSON to false
	I0819 11:11:49.484125   13414 mustload.go:65] Loading cluster: ha-046000
	I0819 11:11:49.484162   13414 notify.go:220] Checking for updates...
	I0819 11:11:49.484369   13414 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:11:49.484376   13414 status.go:255] checking status of ha-046000 ...
	I0819 11:11:49.484664   13414 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:11:49.484668   13414 status.go:343] host is not running, skipping remaining checks
	I0819 11:11:49.484671   13414 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (33.970333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (53.40s)
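
Note the timestamps across the nine status attempts above (11:10:56 through 11:11:49): the harness re-runs "minikube status" at growing intervals until the nodes report Running or it gives up, which is how this subtest spends 53 seconds against a VM that never starts. A rough sketch of such a poll loop, assuming exponential backoff (the actual retry logic in the test helpers may differ):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(time.Minute)
		wait := time.Second
		for time.Now().Before(deadline) {
			out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "ha-046000", "status").CombinedOutput()
			if strings.Count(string(out), "host: Running") >= 3 {
				fmt.Println("all three hosts running")
				return
			}
			time.Sleep(wait)
			wait *= 2 // grow the interval between polls, as the log timestamps suggest
		}
		fmt.Println("timed out; hosts never reached Running")
	}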

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-046000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-046000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-046000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-046000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-046000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-046000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-046000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-046000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (30.809125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-046000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-046000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-046000 -v=7 --alsologtostderr: (2.934604542s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-046000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-046000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.228326708s)

                                                
                                                
-- stdout --
	* [ha-046000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-046000" primary control-plane node in "ha-046000" cluster
	* Restarting existing qemu2 VM for "ha-046000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-046000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:11:52.629183   13445 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:11:52.629377   13445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:52.629382   13445 out.go:358] Setting ErrFile to fd 2...
	I0819 11:11:52.629385   13445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:52.629568   13445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:11:52.630848   13445 out.go:352] Setting JSON to false
	I0819 11:11:52.649850   13445 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6079,"bootTime":1724085033,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:11:52.649925   13445 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:11:52.654870   13445 out.go:177] * [ha-046000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:11:52.662786   13445 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:11:52.662806   13445 notify.go:220] Checking for updates...
	I0819 11:11:52.668788   13445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:11:52.671783   13445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:11:52.675219   13445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:11:52.678817   13445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:11:52.681873   13445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:11:52.685064   13445 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:11:52.685115   13445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:11:52.689770   13445 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:11:52.696840   13445 start.go:297] selected driver: qemu2
	I0819 11:11:52.696848   13445 start.go:901] validating driver "qemu2" against &{Name:ha-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-046000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:11:52.696921   13445 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:11:52.699284   13445 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:11:52.699330   13445 cni.go:84] Creating CNI manager for ""
	I0819 11:11:52.699335   13445 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:11:52.699387   13445 start.go:340] cluster config:
	{Name:ha-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-046000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:11:52.703007   13445 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:11:52.710651   13445 out.go:177] * Starting "ha-046000" primary control-plane node in "ha-046000" cluster
	I0819 11:11:52.714743   13445 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:11:52.714757   13445 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:11:52.714768   13445 cache.go:56] Caching tarball of preloaded images
	I0819 11:11:52.714825   13445 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:11:52.714832   13445 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:11:52.714895   13445 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/ha-046000/config.json ...
	I0819 11:11:52.715239   13445 start.go:360] acquireMachinesLock for ha-046000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:11:52.715290   13445 start.go:364] duration metric: took 30.666µs to acquireMachinesLock for "ha-046000"
	I0819 11:11:52.715299   13445 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:11:52.715305   13445 fix.go:54] fixHost starting: 
	I0819 11:11:52.715434   13445 fix.go:112] recreateIfNeeded on ha-046000: state=Stopped err=<nil>
	W0819 11:11:52.715442   13445 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:11:52.719596   13445 out.go:177] * Restarting existing qemu2 VM for "ha-046000" ...
	I0819 11:11:52.727787   13445 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:11:52.727829   13445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:5f:30:7a:34:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2
	I0819 11:11:52.730033   13445 main.go:141] libmachine: STDOUT: 
	I0819 11:11:52.730054   13445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:11:52.730085   13445 fix.go:56] duration metric: took 14.779666ms for fixHost
	I0819 11:11:52.730089   13445 start.go:83] releasing machines lock for "ha-046000", held for 14.7945ms
	W0819 11:11:52.730097   13445 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:11:52.730140   13445 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:11:52.730145   13445 start.go:729] Will try again in 5 seconds ...
	I0819 11:11:57.732347   13445 start.go:360] acquireMachinesLock for ha-046000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:11:57.732902   13445 start.go:364] duration metric: took 407.208µs to acquireMachinesLock for "ha-046000"
	I0819 11:11:57.733082   13445 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:11:57.733101   13445 fix.go:54] fixHost starting: 
	I0819 11:11:57.733786   13445 fix.go:112] recreateIfNeeded on ha-046000: state=Stopped err=<nil>
	W0819 11:11:57.733811   13445 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:11:57.742305   13445 out.go:177] * Restarting existing qemu2 VM for "ha-046000" ...
	I0819 11:11:57.747303   13445 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:11:57.747550   13445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:5f:30:7a:34:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2
	I0819 11:11:57.755413   13445 main.go:141] libmachine: STDOUT: 
	I0819 11:11:57.755463   13445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:11:57.755546   13445 fix.go:56] duration metric: took 22.448959ms for fixHost
	I0819 11:11:57.755564   13445 start.go:83] releasing machines lock for "ha-046000", held for 22.63925ms
	W0819 11:11:57.755753   13445 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-046000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-046000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:11:57.763201   13445 out.go:201] 
	W0819 11:11:57.767279   13445 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:11:57.767293   13445 out.go:270] * 
	* 
	W0819 11:11:57.768838   13445 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:11:57.778265   13445 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-046000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-046000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (33.417667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.29s)
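Every attempt in this block dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the driver surfaces exit status 1. A minimal standalone probe, sketched here in Go (hypothetical, not part of the minikube suite), would confirm whether the socket_vmnet daemon is listening at all before any VM start is attempted:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path taken from the failing socket_vmnet_client command line above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here means the socket file exists but no
		// daemon is accepting on it, matching the driver errors in this run.
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}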

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.893167ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-046000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-046000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:11:57.918553   13459 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:11:57.918970   13459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:57.918974   13459 out.go:358] Setting ErrFile to fd 2...
	I0819 11:11:57.918976   13459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:57.919110   13459 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:11:57.919341   13459 mustload.go:65] Loading cluster: ha-046000
	I0819 11:11:57.919540   13459 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:11:57.923140   13459 out.go:177] * The control-plane node ha-046000 host is not running: state=Stopped
	I0819 11:11:57.927108   13459 out.go:177]   To start a cluster, run: "minikube start -p ha-046000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-046000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (30.752834ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:11:57.960921   13461 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:11:57.961059   13461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:57.961065   13461 out.go:358] Setting ErrFile to fd 2...
	I0819 11:11:57.961068   13461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:11:57.961218   13461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:11:57.961354   13461 out.go:352] Setting JSON to false
	I0819 11:11:57.961369   13461 mustload.go:65] Loading cluster: ha-046000
	I0819 11:11:57.961438   13461 notify.go:220] Checking for updates...
	I0819 11:11:57.961573   13461 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:11:57.961579   13461 status.go:255] checking status of ha-046000 ...
	I0819 11:11:57.961779   13461 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:11:57.961784   13461 status.go:343] host is not running, skipping remaining checks
	I0819 11:11:57.961786   13461 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (29.925166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
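The two exit codes above carry distinct meanings: exit status 83 is minikube's usage path (the control-plane host is stopped, so node delete refuses to proceed), while exit status 7 from status reports a Stopped host, which helpers_test.go treats as possibly acceptable ("may be ok"). A sketch of that exit-code triage, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "ha-046000", "-n", "ha-046000")
	out, err := cmd.CombinedOutput()
	code := 0
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode() // non-zero exit rather than a spawn failure
		}
	}
	switch code {
	case 0:
		fmt.Printf("host running: %s", out)
	case 7:
		// The helpers log this as "status error: exit status 7 (may be ok)".
		fmt.Printf("host stopped (exit status 7, may be ok): %s", out)
	default:
		fmt.Printf("unexpected exit status %d: %s", code, out)
	}
}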

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-046000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-046000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-046000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-046000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (29.92725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
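ha_test.go:413 derives the Degraded-versus-Stopped verdict by unmarshalling the profile list JSON quoted above and reading the Status field of the matching profile. A reduced sketch of that check, with a struct that models only the two fields the assertion needs (the large Config payload is ignored by json.Unmarshal):

package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Truncated form of the `profile list --output json` payload above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-046000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-046000" && p.Status != "Degraded" {
			fmt.Printf("expected %q to have Degraded status but have %q status\n",
				p.Name, p.Status)
		}
	}
}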

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-046000 stop -v=7 --alsologtostderr: (3.631955625s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr: exit status 7 (66.853417ms)

                                                
                                                
-- stdout --
	ha-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:12:01.768285   13494 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:12:01.768459   13494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:12:01.768463   13494 out.go:358] Setting ErrFile to fd 2...
	I0819 11:12:01.768466   13494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:12:01.768638   13494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:12:01.768784   13494 out.go:352] Setting JSON to false
	I0819 11:12:01.768800   13494 mustload.go:65] Loading cluster: ha-046000
	I0819 11:12:01.768835   13494 notify.go:220] Checking for updates...
	I0819 11:12:01.769072   13494 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:12:01.769084   13494 status.go:255] checking status of ha-046000 ...
	I0819 11:12:01.769327   13494 status.go:330] ha-046000 host status = "Stopped" (err=<nil>)
	I0819 11:12:01.769332   13494 status.go:343] host is not running, skipping remaining checks
	I0819 11:12:01.769335   13494 status.go:257] ha-046000 status: &{Name:ha-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr": ha-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr": ha-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-046000 status -v=7 --alsologtostderr": ha-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (31.764334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.73s)
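The three assertions above all scan the same status text for per-node sections; because only one node block is printed, the counts of control planes, stopped kubelets, and stopped apiservers all come up short at once. A rough sketch of that counting, assuming simple substring matching rather than the test's actual parsing:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The single-node status block captured above.
	status := `ha-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	controlPlanes := strings.Count(status, "type: Control Plane")
	stoppedKubelets := strings.Count(status, "kubelet: Stopped")
	stoppedAPIServers := strings.Count(status, "apiserver: Stopped")
	fmt.Printf("control planes: %d (assertions above expect 2)\n", controlPlanes)
	fmt.Printf("stopped kubelets: %d (assertions above expect 3)\n", stoppedKubelets)
	fmt.Printf("stopped apiservers: %d (assertions above expect 2)\n", stoppedAPIServers)
}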

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-046000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-046000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.185231042s)

                                                
                                                
-- stdout --
	* [ha-046000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-046000" primary control-plane node in "ha-046000" cluster
	* Restarting existing qemu2 VM for "ha-046000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-046000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:12:01.830542   13498 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:12:01.830660   13498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:12:01.830663   13498 out.go:358] Setting ErrFile to fd 2...
	I0819 11:12:01.830666   13498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:12:01.830793   13498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:12:01.831829   13498 out.go:352] Setting JSON to false
	I0819 11:12:01.848110   13498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6088,"bootTime":1724085033,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:12:01.848206   13498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:12:01.853259   13498 out.go:177] * [ha-046000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:12:01.860215   13498 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:12:01.860265   13498 notify.go:220] Checking for updates...
	I0819 11:12:01.868167   13498 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:12:01.871206   13498 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:12:01.874149   13498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:12:01.877155   13498 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:12:01.880227   13498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:12:01.883479   13498 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:12:01.883745   13498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:12:01.888155   13498 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:12:01.895212   13498 start.go:297] selected driver: qemu2
	I0819 11:12:01.895219   13498 start.go:901] validating driver "qemu2" against &{Name:ha-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-046000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:12:01.895277   13498 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:12:01.897671   13498 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:12:01.897698   13498 cni.go:84] Creating CNI manager for ""
	I0819 11:12:01.897703   13498 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:12:01.897746   13498 start.go:340] cluster config:
	{Name:ha-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-046000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:12:01.901369   13498 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:12:01.908139   13498 out.go:177] * Starting "ha-046000" primary control-plane node in "ha-046000" cluster
	I0819 11:12:01.912169   13498 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:12:01.912185   13498 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:12:01.912198   13498 cache.go:56] Caching tarball of preloaded images
	I0819 11:12:01.912256   13498 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:12:01.912264   13498 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:12:01.912331   13498 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/ha-046000/config.json ...
	I0819 11:12:01.912758   13498 start.go:360] acquireMachinesLock for ha-046000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:12:01.912785   13498 start.go:364] duration metric: took 21.416µs to acquireMachinesLock for "ha-046000"
	I0819 11:12:01.912795   13498 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:12:01.912800   13498 fix.go:54] fixHost starting: 
	I0819 11:12:01.912919   13498 fix.go:112] recreateIfNeeded on ha-046000: state=Stopped err=<nil>
	W0819 11:12:01.912927   13498 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:12:01.916109   13498 out.go:177] * Restarting existing qemu2 VM for "ha-046000" ...
	I0819 11:12:01.924127   13498 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:12:01.924163   13498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:5f:30:7a:34:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2
	I0819 11:12:01.926277   13498 main.go:141] libmachine: STDOUT: 
	I0819 11:12:01.926304   13498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:12:01.926328   13498 fix.go:56] duration metric: took 13.529041ms for fixHost
	I0819 11:12:01.926332   13498 start.go:83] releasing machines lock for "ha-046000", held for 13.542709ms
	W0819 11:12:01.926338   13498 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:12:01.926365   13498 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:12:01.926369   13498 start.go:729] Will try again in 5 seconds ...
	I0819 11:12:06.928651   13498 start.go:360] acquireMachinesLock for ha-046000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:12:06.929063   13498 start.go:364] duration metric: took 297.041µs to acquireMachinesLock for "ha-046000"
	I0819 11:12:06.929179   13498 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:12:06.929202   13498 fix.go:54] fixHost starting: 
	I0819 11:12:06.929884   13498 fix.go:112] recreateIfNeeded on ha-046000: state=Stopped err=<nil>
	W0819 11:12:06.929910   13498 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:12:06.934074   13498 out.go:177] * Restarting existing qemu2 VM for "ha-046000" ...
	I0819 11:12:06.942081   13498 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:12:06.942309   13498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:5f:30:7a:34:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/ha-046000/disk.qcow2
	I0819 11:12:06.951294   13498 main.go:141] libmachine: STDOUT: 
	I0819 11:12:06.951352   13498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:12:06.951426   13498 fix.go:56] duration metric: took 22.226583ms for fixHost
	I0819 11:12:06.951446   13498 start.go:83] releasing machines lock for "ha-046000", held for 22.358083ms
	W0819 11:12:06.951597   13498 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-046000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-046000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:12:06.959098   13498 out.go:201] 
	W0819 11:12:06.963170   13498 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:12:06.963193   13498 out.go:270] * 
	* 
	W0819 11:12:06.965906   13498 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:12:06.974038   13498 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-046000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (68.906417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
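The restart path is visible in the log: fixHost fails, the driver logs "Will try again in 5 seconds", sleeps, retries exactly once, and then exits with GUEST_PROVISION. A stripped-down sketch of that retry shape, with startHost standing in for the real qemu2 driver call (not the actual minikube source):

package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	// Stand-in for the qemu2 driver start; fails the same way as this run.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			// Second failure is terminal, mirroring the GUEST_PROVISION exit.
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}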

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-046000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-046000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-046000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-046000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (30.482417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-046000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-046000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.991958ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-046000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-046000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:12:07.167427   13515 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:12:07.167577   13515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:12:07.167581   13515 out.go:358] Setting ErrFile to fd 2...
	I0819 11:12:07.167583   13515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:12:07.167727   13515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:12:07.167951   13515 mustload.go:65] Loading cluster: ha-046000
	I0819 11:12:07.168147   13515 config.go:182] Loaded profile config "ha-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:12:07.171909   13515 out.go:177] * The control-plane node ha-046000 host is not running: state=Stopped
	I0819 11:12:07.175932   13515 out.go:177]   To start a cluster, run: "minikube start -p ha-046000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-046000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (30.645833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-046000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-046000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-046000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-046000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-046000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-046000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-046000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-046000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-046000 -n ha-046000: exit status 7 (30.448291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (9.95s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-672000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-672000 --driver=qemu2 : exit status 80 (9.876922208s)

                                                
                                                
-- stdout --
	* [image-672000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-672000" primary control-plane node in "image-672000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-672000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-672000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-672000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-672000 -n image-672000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-672000 -n image-672000: exit status 7 (68.777333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-672000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.95s)

                                                
                                    
TestJSONOutput/start/Command (9.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-680000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-680000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.7917575s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a7808091-7032-42ac-bb8f-64f8df2ea0c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-680000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b79c36e7-7209-4365-9959-dc00f9bc3288","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19468"}}
	{"specversion":"1.0","id":"4ba14538-6eec-4276-a165-60a27eded1b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig"}}
	{"specversion":"1.0","id":"f193f42e-6dd6-4ac8-918f-98bfc59ba75a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a20de3b6-b6bf-461a-8997-5ed536946a67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fcbcf3fc-fe48-41a9-9122-97cc992c8364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube"}}
	{"specversion":"1.0","id":"c4710775-567e-4314-aed0-dc5444109317","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"56da2e6e-ab1f-4bfa-a4b6-e95d5ba54284","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2ad43ea-2d30-4cf7-bd8c-4ab0f1608df2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4120fdf6-7eb9-42cc-885a-3c4358e69246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-680000\" primary control-plane node in \"json-output-680000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"392067ab-8dea-4ad3-8804-557078593e5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"87c8bcd7-7119-4444-bd10-15660deef65e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-680000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"6539e066-f311-4e58-8808-19bc29173af6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"02f39ad4-2a4f-4ec5-b0b2-3fa809173ca3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"39909656-1313-407e-8ba1-c372b4b86fe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-680000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"150f14dc-094d-4d8e-b0aa-824aea2efaa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"6b78eb8d-c925-480c-93ed-b887bde9e568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-680000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.79s)
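
The failure above is mechanical rather than flaky: with --output=json, minikube is expected to emit one CloudEvent JSON object per stdout line, but the qemu2 driver leaks the raw "OUTPUT: " and "ERROR: ..." text from socket_vmnet_client into the same stream. The first leaked line begins with 'O', which is exactly where the decoder reports "invalid character 'O' looking for beginning of value". A minimal Go sketch (hypothetical, not the suite's code) reproduces the decode step:

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"strings"
    )

    func main() {
    	// A valid CloudEvent line followed by the leaked driver lines
    	// captured in the stdout above.
    	stream := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.step\"}\n" +
    		"OUTPUT: \n" +
    		"ERROR: Failed to connect to \"/var/run/socket_vmnet\": Connection refused\n"
    	sc := bufio.NewScanner(strings.NewReader(stream))
    	for sc.Scan() {
    		var ev map[string]any
    		if err := json.Unmarshal([]byte(sc.Text()), &ev); err != nil {
    			// First failure prints: invalid character 'O' looking for beginning of value
    			fmt.Println("not a cloud event:", err)
    			continue
    		}
    		fmt.Println("cloud event type:", ev["type"])
    	}
    }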

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-680000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-680000 --output=json --user=testUser: exit status 83 (79.097375ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"80f7d863-d6ff-4905-a095-9a9abb437637","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-680000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"804aff65-c347-4fb9-b464-3b790d962f2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-680000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-680000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-680000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-680000 --output=json --user=testUser: exit status 83 (46.450042ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-680000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-680000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-680000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-680000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.11s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-555000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-555000 --driver=qemu2 : exit status 80 (9.805455417s)

                                                
                                                
-- stdout --
	* [first-555000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-555000" primary control-plane node in "first-555000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-555000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-555000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-555000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-19 11:12:40.917558 -0700 PDT m=+430.174121876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-557000 -n second-557000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-557000 -n second-557000: exit status 85 (83.643542ms)

                                                
                                                
-- stdout --
	* Profile "second-557000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-557000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-557000" host is not running, skipping log retrieval (state="* Profile \"second-557000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-557000\"")
helpers_test.go:175: Cleaning up "second-557000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-557000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-19 11:12:41.111328 -0700 PDT m=+430.367892626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-555000 -n first-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-555000 -n first-555000: exit status 7 (30.796875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-555000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-555000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-555000
--- FAIL: TestMinikubeProfile (10.11s)
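
Every qemu2 start in this run fails at the same point: connecting to the socket_vmnet unix socket, which the driver needs before it can attach the VM's network backend. A quick probe (a hypothetical check, not part of the suite) that dials the same socket as socket_vmnet_client will show whether the daemon is listening on the agent:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Socket path taken from the failing runs; adjust if your install differs.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// On this agent this would print the same "connection refused" as above.
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If the dial is refused, restarting the socket_vmnet service on the host is a plausible first step; until it accepts connections, every qemu2-driver test in this report fails identically.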

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.08s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-371000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-371000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.010890375s)

                                                
                                                
-- stdout --
	* [mount-start-1-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-371000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-371000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-371000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-371000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-371000 -n mount-start-1-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-371000 -n mount-start-1-371000: exit status 7 (69.730291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.08s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-540000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-540000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.884719333s)

                                                
                                                
-- stdout --
	* [multinode-540000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-540000" primary control-plane node in "multinode-540000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-540000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:12:51.512851   13692 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:12:51.513226   13692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:12:51.513231   13692 out.go:358] Setting ErrFile to fd 2...
	I0819 11:12:51.513234   13692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:12:51.513430   13692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:12:51.514747   13692 out.go:352] Setting JSON to false
	I0819 11:12:51.531006   13692 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6138,"bootTime":1724085033,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:12:51.531074   13692 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:12:51.536829   13692 out.go:177] * [multinode-540000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:12:51.544745   13692 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:12:51.544797   13692 notify.go:220] Checking for updates...
	I0819 11:12:51.554238   13692 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:12:51.557705   13692 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:12:51.560740   13692 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:12:51.563721   13692 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:12:51.566676   13692 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:12:51.569872   13692 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:12:51.573730   13692 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:12:51.580688   13692 start.go:297] selected driver: qemu2
	I0819 11:12:51.580696   13692 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:12:51.580704   13692 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:12:51.582948   13692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:12:51.585708   13692 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:12:51.588798   13692 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:12:51.588845   13692 cni.go:84] Creating CNI manager for ""
	I0819 11:12:51.588848   13692 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 11:12:51.588852   13692 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:12:51.588899   13692 start.go:340] cluster config:
	{Name:multinode-540000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:12:51.592484   13692 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:12:51.599501   13692 out.go:177] * Starting "multinode-540000" primary control-plane node in "multinode-540000" cluster
	I0819 11:12:51.603679   13692 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:12:51.603721   13692 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:12:51.603734   13692 cache.go:56] Caching tarball of preloaded images
	I0819 11:12:51.603796   13692 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:12:51.603803   13692 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:12:51.604046   13692 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/multinode-540000/config.json ...
	I0819 11:12:51.604058   13692 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/multinode-540000/config.json: {Name:mke2cba87f09ce9ac90a24ae5a563a4210466550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:12:51.604284   13692 start.go:360] acquireMachinesLock for multinode-540000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:12:51.604327   13692 start.go:364] duration metric: took 37µs to acquireMachinesLock for "multinode-540000"
	I0819 11:12:51.604340   13692 start.go:93] Provisioning new machine with config: &{Name:multinode-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:12:51.604368   13692 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:12:51.612682   13692 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:12:51.630714   13692 start.go:159] libmachine.API.Create for "multinode-540000" (driver="qemu2")
	I0819 11:12:51.630748   13692 client.go:168] LocalClient.Create starting
	I0819 11:12:51.630818   13692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:12:51.630851   13692 main.go:141] libmachine: Decoding PEM data...
	I0819 11:12:51.630860   13692 main.go:141] libmachine: Parsing certificate...
	I0819 11:12:51.630896   13692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:12:51.630922   13692 main.go:141] libmachine: Decoding PEM data...
	I0819 11:12:51.630934   13692 main.go:141] libmachine: Parsing certificate...
	I0819 11:12:51.631285   13692 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:12:51.785594   13692 main.go:141] libmachine: Creating SSH key...
	I0819 11:12:51.908208   13692 main.go:141] libmachine: Creating Disk image...
	I0819 11:12:51.908214   13692 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:12:51.908458   13692 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2
	I0819 11:12:51.917899   13692 main.go:141] libmachine: STDOUT: 
	I0819 11:12:51.917915   13692 main.go:141] libmachine: STDERR: 
	I0819 11:12:51.917969   13692 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2 +20000M
	I0819 11:12:51.925799   13692 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:12:51.925813   13692 main.go:141] libmachine: STDERR: 
	I0819 11:12:51.925830   13692 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2
	I0819 11:12:51.925836   13692 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:12:51.925850   13692 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:12:51.925878   13692 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:7b:e6:8e:b3:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2
	I0819 11:12:51.927425   13692 main.go:141] libmachine: STDOUT: 
	I0819 11:12:51.927438   13692 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:12:51.927457   13692 client.go:171] duration metric: took 296.705375ms to LocalClient.Create
	I0819 11:12:53.929630   13692 start.go:128] duration metric: took 2.325252917s to createHost
	I0819 11:12:53.929694   13692 start.go:83] releasing machines lock for "multinode-540000", held for 2.325369333s
	W0819 11:12:53.929832   13692 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:12:53.940858   13692 out.go:177] * Deleting "multinode-540000" in qemu2 ...
	W0819 11:12:53.975890   13692 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:12:53.975950   13692 start.go:729] Will try again in 5 seconds ...
	I0819 11:12:58.978182   13692 start.go:360] acquireMachinesLock for multinode-540000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:12:58.978641   13692 start.go:364] duration metric: took 360.125µs to acquireMachinesLock for "multinode-540000"
	I0819 11:12:58.978793   13692 start.go:93] Provisioning new machine with config: &{Name:multinode-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:12:58.979054   13692 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:12:58.987882   13692 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:12:59.038040   13692 start.go:159] libmachine.API.Create for "multinode-540000" (driver="qemu2")
	I0819 11:12:59.038097   13692 client.go:168] LocalClient.Create starting
	I0819 11:12:59.038246   13692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:12:59.038307   13692 main.go:141] libmachine: Decoding PEM data...
	I0819 11:12:59.038324   13692 main.go:141] libmachine: Parsing certificate...
	I0819 11:12:59.038387   13692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:12:59.038433   13692 main.go:141] libmachine: Decoding PEM data...
	I0819 11:12:59.038444   13692 main.go:141] libmachine: Parsing certificate...
	I0819 11:12:59.039129   13692 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:12:59.201753   13692 main.go:141] libmachine: Creating SSH key...
	I0819 11:12:59.300362   13692 main.go:141] libmachine: Creating Disk image...
	I0819 11:12:59.300370   13692 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:12:59.300594   13692 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2
	I0819 11:12:59.310047   13692 main.go:141] libmachine: STDOUT: 
	I0819 11:12:59.310062   13692 main.go:141] libmachine: STDERR: 
	I0819 11:12:59.310108   13692 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2 +20000M
	I0819 11:12:59.317940   13692 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:12:59.317954   13692 main.go:141] libmachine: STDERR: 
	I0819 11:12:59.317964   13692 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2
	I0819 11:12:59.317969   13692 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:12:59.317984   13692 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:12:59.318021   13692 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:61:07:48:f6:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2
	I0819 11:12:59.319586   13692 main.go:141] libmachine: STDOUT: 
	I0819 11:12:59.319599   13692 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:12:59.319612   13692 client.go:171] duration metric: took 281.510334ms to LocalClient.Create
	I0819 11:13:01.321794   13692 start.go:128] duration metric: took 2.342719375s to createHost
	I0819 11:13:01.321848   13692 start.go:83] releasing machines lock for "multinode-540000", held for 2.343197s
	W0819 11:13:01.322238   13692 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:13:01.337883   13692 out.go:201] 
	W0819 11:13:01.342079   13692 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:13:01.342130   13692 out.go:270] * 
	* 
	W0819 11:13:01.345091   13692 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:13:01.355830   13692 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-540000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (66.129292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.95s)
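
The -alsologtostderr trace narrows the failure down: qemu-img convert and resize both succeed ("STDOUT: Image resized."), and the error only appears when the driver execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to /var/run/socket_vmnet before QEMU even starts. The retry five seconds later (start.go:729) goes through the identical path, so both attempts fail the same way. A trimmed reconstruction of that launch step (arguments abbreviated from the log above; a sketch, not the driver's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// socket_vmnet_client connects to the socket first, then execs QEMU
    	// with the vmnet connection passed as fd 3 (-netdev socket,id=net0,fd=3).
    	cmd := exec.Command(
    		"/opt/socket_vmnet/bin/socket_vmnet_client", "/var/run/socket_vmnet",
    		"qemu-system-aarch64", "-M", "virt,highmem=off", "-cpu", "host",
    		"-display", "none", "-accel", "hvf", "-m", "2200", "-smp", "2",
    		// ...drive/cdrom/netdev arguments omitted; see the full command in the log.
    	)
    	out, err := cmd.CombinedOutput()
    	// With no daemon listening this fails before QEMU runs, matching
    	// STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
    	fmt.Printf("%serr: %v\n", out, err)
    }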

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (68.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (65.030292ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-540000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- rollout status deployment/busybox: exit status 1 (58.203917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.692541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.517208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.997792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.281708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.456125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.021917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.8295ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.695292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.60875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.170875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.824417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.632875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.372666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.362833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (30.540917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (68.90s)
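
The 68.90s wall time is almost entirely retry overhead: with no cluster behind the kubeconfig, the pod-IP query at multinode_test.go:505 fails fast, and the test keeps polling (roughly every seven seconds, judging by the run count and total duration) until its deadline. A sketch of that poll-until-deadline pattern, using plain kubectl rather than the suite's "minikube kubectl -p" wrapper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(time.Minute)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--context", "multinode-540000",
    			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
    		if err == nil && len(out) > 0 {
    			fmt.Printf("pod IPs: %s\n", out)
    			return
    		}
    		// Here every attempt fails immediately with "no server found for
    		// cluster", so the loop simply burns its whole deadline.
    		time.Sleep(7 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod IPs")
    }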

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-540000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.01925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (30.958458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-540000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-540000 -v 3 --alsologtostderr: exit status 83 (43.316833ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-540000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-540000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:10.456563   13823 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:10.456731   13823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:10.456734   13823 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:10.456736   13823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:10.456864   13823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:10.457114   13823 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:10.457301   13823 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:10.462173   13823 out.go:177] * The control-plane node multinode-540000 host is not running: state=Stopped
	I0819 11:14:10.466057   13823 out.go:177]   To start a cluster, run: "minikube start -p multinode-540000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-540000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (30.094875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
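
Exit status 83 here is minikube's wrong-state exit: `node add` refuses to proceed while the control-plane host is Stopped. A hedged sketch of guarding for that the same way the post-mortem does; the binary path is the test harness's and the expected "Running" string is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the post-mortem runs below.
	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
		"--format", "{{.Host}}", "-p", "multinode-540000").Output()
	if strings.TrimSpace(string(out)) != "Running" {
		fmt.Println(`host not running; run: minikube start -p multinode-540000`)
		return
	}
	fmt.Println("host running; node add can proceed")
}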

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-540000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-540000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.354791ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-540000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-540000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-540000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (30.217166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
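
When the cluster is up, the jsonpath query above emits one labels object plus a comma per node inside the brackets, so the result is not quite valid JSON. A speculative sketch of decoding it; the sample string is hypothetical and trimming the trailing ",]" is an assumption about the output shape:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Hypothetical healthy-cluster output of the jsonpath query above.
	raw := `[{"kubernetes.io/hostname":"multinode-540000"},{"kubernetes.io/hostname":"multinode-540000-m02"},]`
	// The {range}{,}{end} template leaves a trailing comma; trim it (assumption).
	if strings.HasSuffix(raw, ",]") {
		raw = strings.TrimSuffix(raw, ",]") + "]"
	}
	var labels []map[string]string
	if err := json.Unmarshal([]byte(raw), &labels); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println("labeled nodes:", len(labels))
}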

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-540000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-540000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-540000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-540000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (30.095959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
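
The assertion compares the node count inside the profile JSON dumped above (1 node instead of the expected 3). A minimal sketch that decodes just the fields involved; the struct mirrors only the `valid[].Config.Nodes` shape visible in that output, and the binary path is the harness's:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields this check touches, mirroring the JSON above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		// The test wanted 3 nodes here; the stopped run above reports 1.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}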

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status --output json --alsologtostderr: exit status 7 (30.046792ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-540000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:10.664731   13835 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:10.664868   13835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:10.664871   13835 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:10.664874   13835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:10.665002   13835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:10.665117   13835 out.go:352] Setting JSON to true
	I0819 11:14:10.665128   13835 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:10.665193   13835 notify.go:220] Checking for updates...
	I0819 11:14:10.665320   13835 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:10.665327   13835 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:10.665522   13835 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:10.665526   13835 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:10.665528   13835 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-540000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (30.789084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
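
The unmarshal error above is a shape mismatch: with a single node, `status --output json` printed one bare object, while the test decodes into a slice (`[]cmd.Status`). A sketch that tolerates both shapes by probing the first byte; the struct is trimmed to the fields visible in the log, and this is an illustration, not the suite's decoder:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// Trimmed to the fields visible in the stdout above.
type nodeStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func decodeStatuses(out []byte) ([]nodeStatus, error) {
	b := bytes.TrimSpace(out)
	if len(b) > 0 && b[0] == '{' { // single node: a bare object
		var s nodeStatus
		if err := json.Unmarshal(b, &s); err != nil {
			return nil, err
		}
		return []nodeStatus{s}, nil
	}
	var ss []nodeStatus // multiple nodes: an array
	err := json.Unmarshal(b, &ss)
	return ss, err
}

func main() {
	// Verbatim stdout from the failing run above.
	out := []byte(`{"Name":"multinode-540000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	ss, err := decodeStatuses(out)
	fmt.Println(ss, err)
}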

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 node stop m03: exit status 85 (46.940542ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-540000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status: exit status 7 (30.285833ms)

                                                
                                                
-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr: exit status 7 (29.888375ms)

                                                
                                                
-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:10.803576   13843 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:10.803718   13843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:10.803721   13843 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:10.803723   13843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:10.803848   13843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:10.803964   13843 out.go:352] Setting JSON to false
	I0819 11:14:10.803975   13843 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:10.804025   13843 notify.go:220] Checking for updates...
	I0819 11:14:10.804183   13843 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:10.804189   13843 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:10.804415   13843 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:10.804419   13843 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:10.804421   13843 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr": multinode-540000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (29.779792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
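
The "incorrect number of running kubelets" message boils down to counting `kubelet: Running` lines in the plain status output, which is zero here. A small illustrative sketch of that count; expecting a non-zero total is the test's logic, the helper below is not:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output captured above; everything is Stopped.
	out := `multinode-540000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	// The check amounts to counting running kubelets across all nodes.
	fmt.Println("running kubelets:", strings.Count(out, "kubelet: Running")) // 0
}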

TestMultiNode/serial/StartAfterStop (37.63s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.369292ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:10.864174   13847 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:10.864534   13847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:10.864539   13847 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:10.864541   13847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:10.864711   13847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:10.864941   13847 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:10.865126   13847 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:10.869020   13847 out.go:201] 
	W0819 11:14:10.872945   13847 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0819 11:14:10.872950   13847 out.go:270] * 
	* 
	W0819 11:14:10.875007   13847 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:14:10.879007   13847 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0819 11:14:10.864174   13847 out.go:345] Setting OutFile to fd 1 ...
I0819 11:14:10.864534   13847 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:14:10.864539   13847 out.go:358] Setting ErrFile to fd 2...
I0819 11:14:10.864541   13847 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:14:10.864711   13847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
I0819 11:14:10.864941   13847 mustload.go:65] Loading cluster: multinode-540000
I0819 11:14:10.865126   13847 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:14:10.869020   13847 out.go:201] 
W0819 11:14:10.872945   13847 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0819 11:14:10.872950   13847 out.go:270] * 
* 
W0819 11:14:10.875007   13847 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 11:14:10.879007   13847 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-540000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr: exit status 7 (30.181ms)

                                                
                                                
-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:10.912487   13849 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:10.912630   13849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:10.912633   13849 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:10.912636   13849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:10.912755   13849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:10.912866   13849 out.go:352] Setting JSON to false
	I0819 11:14:10.912880   13849 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:10.912940   13849 notify.go:220] Checking for updates...
	I0819 11:14:10.913067   13849 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:10.913073   13849 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:10.913278   13849 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:10.913282   13849 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:10.913284   13849 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr: exit status 7 (73.491667ms)

                                                
                                                
-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:11.807528   13853 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:11.807718   13853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:11.807722   13853 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:11.807726   13853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:11.807893   13853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:11.808045   13853 out.go:352] Setting JSON to false
	I0819 11:14:11.808062   13853 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:11.808100   13853 notify.go:220] Checking for updates...
	I0819 11:14:11.808312   13853 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:11.808319   13853 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:11.808593   13853 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:11.808598   13853 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:11.808601   13853 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr: exit status 7 (74.496292ms)

                                                
                                                
-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:13.576102   13855 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:13.576329   13855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:13.576334   13855 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:13.576337   13855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:13.576502   13855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:13.576661   13855 out.go:352] Setting JSON to false
	I0819 11:14:13.576674   13855 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:13.576712   13855 notify.go:220] Checking for updates...
	I0819 11:14:13.576937   13855 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:13.576945   13855 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:13.577221   13855 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:13.577226   13855 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:13.577229   13855 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr: exit status 7 (74.940917ms)

                                                
                                                
-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:15.383347   13857 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:15.383547   13857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:15.383552   13857 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:15.383555   13857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:15.383727   13857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:15.383904   13857 out.go:352] Setting JSON to false
	I0819 11:14:15.383919   13857 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:15.383958   13857 notify.go:220] Checking for updates...
	I0819 11:14:15.384196   13857 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:15.384205   13857 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:15.384474   13857 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:15.384479   13857 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:15.384482   13857 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr: exit status 7 (75.365417ms)

                                                
                                                
-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:19.051478   13861 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:19.051661   13861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:19.051666   13861 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:19.051670   13861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:19.051895   13861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:19.052061   13861 out.go:352] Setting JSON to false
	I0819 11:14:19.052076   13861 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:19.052115   13861 notify.go:220] Checking for updates...
	I0819 11:14:19.052360   13861 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:19.052368   13861 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:19.052669   13861 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:19.052674   13861 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:19.052677   13861 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr: exit status 7 (73.2985ms)

                                                
                                                
-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:26.133725   13867 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:26.133919   13867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:26.133923   13867 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:26.133926   13867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:26.134099   13867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:26.134255   13867 out.go:352] Setting JSON to false
	I0819 11:14:26.134269   13867 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:26.134303   13867 notify.go:220] Checking for updates...
	I0819 11:14:26.134532   13867 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:26.134542   13867 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:26.134843   13867 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:26.134848   13867 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:26.134851   13867 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr: exit status 7 (72.402917ms)

                                                
                                                
-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:35.662130   13875 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:35.662296   13875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:35.662301   13875 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:35.662304   13875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:35.662478   13875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:35.662635   13875 out.go:352] Setting JSON to false
	I0819 11:14:35.662648   13875 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:35.662678   13875 notify.go:220] Checking for updates...
	I0819 11:14:35.662909   13875 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:35.662917   13875 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:35.663181   13875 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:35.663186   13875 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:35.663189   13875 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr: exit status 7 (76.169ms)

                                                
                                                
-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:48.433868   13896 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:48.434022   13896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:48.434030   13896 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:48.434033   13896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:48.434210   13896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:48.434376   13896 out.go:352] Setting JSON to false
	I0819 11:14:48.434393   13896 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:48.434423   13896 notify.go:220] Checking for updates...
	I0819 11:14:48.434638   13896 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:48.434647   13896 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:48.434920   13896 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:48.434925   13896 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:48.434928   13896 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-540000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (32.803833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (37.63s)
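
The timestamps on the repeated status probes above (11:14:10, :11, :13, :15, :19, :26, :35, :48) suggest retries at roughly doubling intervals. A minimal sketch of such a backoff loop, not the suite's actual retry helper; the binary path is the harness's:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		// Same probe the test repeats above.
		if exec.Command("out/minikube-darwin-arm64", "-p", "multinode-540000", "status").Run() == nil {
			fmt.Println("cluster is up")
			return
		}
		fmt.Printf("attempt %d failed, retrying in %s\n", attempt, delay)
		time.Sleep(delay)
		delay *= 2 // roughly the doubling cadence seen in the log
	}
	fmt.Println("giving up: host stayed Stopped")
}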

TestMultiNode/serial/RestartKeepsNodes (8.99s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-540000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-540000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-540000: (3.625821542s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-540000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-540000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.22703725s)

                                                
                                                
-- stdout --
	* [multinode-540000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-540000" primary control-plane node in "multinode-540000" cluster
	* Restarting existing qemu2 VM for "multinode-540000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-540000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:14:52.194350   13922 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:52.194506   13922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:52.194516   13922 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:52.194519   13922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:52.194706   13922 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:52.195964   13922 out.go:352] Setting JSON to false
	I0819 11:14:52.215461   13922 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6259,"bootTime":1724085033,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:14:52.215532   13922 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:14:52.221010   13922 out.go:177] * [multinode-540000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:14:52.228910   13922 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:14:52.228959   13922 notify.go:220] Checking for updates...
	I0819 11:14:52.234908   13922 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:14:52.237884   13922 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:14:52.240850   13922 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:14:52.243886   13922 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:14:52.246836   13922 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:14:52.250118   13922 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:52.250166   13922 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:14:52.254871   13922 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:14:52.261840   13922 start.go:297] selected driver: qemu2
	I0819 11:14:52.261847   13922 start.go:901] validating driver "qemu2" against &{Name:multinode-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:multinode-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:14:52.261900   13922 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:14:52.264141   13922 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:14:52.264191   13922 cni.go:84] Creating CNI manager for ""
	I0819 11:14:52.264196   13922 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:14:52.264243   13922 start.go:340] cluster config:
	{Name:multinode-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-540000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:14:52.267747   13922 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:14:52.274841   13922 out.go:177] * Starting "multinode-540000" primary control-plane node in "multinode-540000" cluster
	I0819 11:14:52.278872   13922 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:14:52.278887   13922 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:14:52.278902   13922 cache.go:56] Caching tarball of preloaded images
	I0819 11:14:52.278960   13922 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:14:52.278965   13922 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:14:52.279040   13922 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/multinode-540000/config.json ...
	I0819 11:14:52.279371   13922 start.go:360] acquireMachinesLock for multinode-540000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:14:52.279405   13922 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "multinode-540000"
	I0819 11:14:52.279415   13922 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:14:52.279421   13922 fix.go:54] fixHost starting: 
	I0819 11:14:52.279534   13922 fix.go:112] recreateIfNeeded on multinode-540000: state=Stopped err=<nil>
	W0819 11:14:52.279544   13922 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:14:52.283863   13922 out.go:177] * Restarting existing qemu2 VM for "multinode-540000" ...
	I0819 11:14:52.291929   13922 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:14:52.291973   13922 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:61:07:48:f6:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2
	I0819 11:14:52.293890   13922 main.go:141] libmachine: STDOUT: 
	I0819 11:14:52.293907   13922 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:14:52.293940   13922 fix.go:56] duration metric: took 14.520417ms for fixHost
	I0819 11:14:52.293945   13922 start.go:83] releasing machines lock for "multinode-540000", held for 14.535041ms
	W0819 11:14:52.293952   13922 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:14:52.293995   13922 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:14:52.294000   13922 start.go:729] Will try again in 5 seconds ...
	I0819 11:14:57.296124   13922 start.go:360] acquireMachinesLock for multinode-540000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:14:57.296574   13922 start.go:364] duration metric: took 335.167µs to acquireMachinesLock for "multinode-540000"
	I0819 11:14:57.296692   13922 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:14:57.296714   13922 fix.go:54] fixHost starting: 
	I0819 11:14:57.297449   13922 fix.go:112] recreateIfNeeded on multinode-540000: state=Stopped err=<nil>
	W0819 11:14:57.297477   13922 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:14:57.302071   13922 out.go:177] * Restarting existing qemu2 VM for "multinode-540000" ...
	I0819 11:14:57.309094   13922 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:14:57.309260   13922 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:61:07:48:f6:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2
	I0819 11:14:57.318324   13922 main.go:141] libmachine: STDOUT: 
	I0819 11:14:57.318394   13922 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:14:57.318525   13922 fix.go:56] duration metric: took 21.811416ms for fixHost
	I0819 11:14:57.318547   13922 start.go:83] releasing machines lock for "multinode-540000", held for 21.945291ms
	W0819 11:14:57.318730   13922 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-540000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-540000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:14:57.326116   13922 out.go:201] 
	W0819 11:14:57.329069   13922 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:14:57.329094   13922 out.go:270] * 
	* 
	W0819 11:14:57.331699   13922 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:14:57.339029   13922 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-540000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-540000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (32.89625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.99s)
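
Every failure in this run traces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never gets its network backend and libmachine gives up after a single retry. A quick triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe; the binary and socket paths are taken from the log above, the service-based restart is an assumption that does not apply to a manual install, and the gateway address mirrors the socket_vmnet README default:

	# verify the socket exists and a daemon is holding it
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# restart the Homebrew-managed service (must run as root)
	sudo brew services restart socket_vmnet

	# alternatively, run the daemon in the foreground to watch for startup errors
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Until the daemon is reachable again, every "minikube start" against the qemu2 driver on the socket_vmnet network will keep exiting with status 80, which is what each of the remaining subtests below reproduces.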

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 node delete m03: exit status 83 (42.585ms)

-- stdout --
	* The control-plane node multinode-540000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-540000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-540000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr: exit status 7 (30.129958ms)

-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:14:57.526044   13942 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:14:57.526178   13942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:57.526182   13942 out.go:358] Setting ErrFile to fd 2...
	I0819 11:14:57.526184   13942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:14:57.526317   13942 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:14:57.526445   13942 out.go:352] Setting JSON to false
	I0819 11:14:57.526455   13942 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:14:57.526514   13942 notify.go:220] Checking for updates...
	I0819 11:14:57.526635   13942 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:14:57.526644   13942 status.go:255] checking status of multinode-540000 ...
	I0819 11:14:57.526856   13942 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:14:57.526860   13942 status.go:343] host is not running, skipping remaining checks
	I0819 11:14:57.526863   13942 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (29.91575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.35s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-540000 stop: (3.22520975s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status: exit status 7 (64.111709ms)

-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr: exit status 7 (33.5025ms)

-- stdout --
	multinode-540000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:15:00.879275   13966 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:15:00.879430   13966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:15:00.879434   13966 out.go:358] Setting ErrFile to fd 2...
	I0819 11:15:00.879436   13966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:15:00.879577   13966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:15:00.879691   13966 out.go:352] Setting JSON to false
	I0819 11:15:00.879702   13966 mustload.go:65] Loading cluster: multinode-540000
	I0819 11:15:00.879757   13966 notify.go:220] Checking for updates...
	I0819 11:15:00.879901   13966 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:15:00.879907   13966 status.go:255] checking status of multinode-540000 ...
	I0819 11:15:00.880132   13966 status.go:330] multinode-540000 host status = "Stopped" (err=<nil>)
	I0819 11:15:00.880135   13966 status.go:343] host is not running, skipping remaining checks
	I0819 11:15:00.880137   13966 status.go:257] multinode-540000 status: &{Name:multinode-540000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr": multinode-540000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr": multinode-540000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (30.135875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.35s)
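
The two "incorrect number of stopped ..." assertions above are count checks: after the stop, the test expects a "host: Stopped" / "kubelet: Stopped" pair per node, but because the extra nodes were never created (every start in this suite failed on socket_vmnet), status reports only the single control-plane entry. The counts the test asserts on can be reproduced by hand; a sketch, assuming the same binary and profile as above (the grep reproduction is illustrative, not the actual multinode_test.go implementation):

	# a multi-node cluster should yield one match per node; this run yields 1
	out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr 2>/dev/null | grep -c 'host: Stopped'
	out/minikube-darwin-arm64 -p multinode-540000 status --alsologtostderr 2>/dev/null | grep -c 'kubelet: Stopped'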

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-540000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-540000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18296675s)

-- stdout --
	* [multinode-540000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-540000" primary control-plane node in "multinode-540000" cluster
	* Restarting existing qemu2 VM for "multinode-540000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-540000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:15:00.939483   13970 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:15:00.939616   13970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:15:00.939620   13970 out.go:358] Setting ErrFile to fd 2...
	I0819 11:15:00.939623   13970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:15:00.939757   13970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:15:00.940778   13970 out.go:352] Setting JSON to false
	I0819 11:15:00.956873   13970 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6267,"bootTime":1724085033,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:15:00.956940   13970 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:15:00.961530   13970 out.go:177] * [multinode-540000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:15:00.968296   13970 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:15:00.968347   13970 notify.go:220] Checking for updates...
	I0819 11:15:00.975230   13970 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:15:00.978261   13970 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:15:00.981293   13970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:15:00.984238   13970 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:15:00.987294   13970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:15:00.990541   13970 config.go:182] Loaded profile config "multinode-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:15:00.990798   13970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:15:00.994234   13970 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:15:01.001311   13970 start.go:297] selected driver: qemu2
	I0819 11:15:01.001319   13970 start.go:901] validating driver "qemu2" against &{Name:multinode-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:15:01.001385   13970 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:15:01.003689   13970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:15:01.003713   13970 cni.go:84] Creating CNI manager for ""
	I0819 11:15:01.003719   13970 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:15:01.003770   13970 start.go:340] cluster config:
	{Name:multinode-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:15:01.007227   13970 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:01.014260   13970 out.go:177] * Starting "multinode-540000" primary control-plane node in "multinode-540000" cluster
	I0819 11:15:01.018265   13970 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:15:01.018278   13970 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:15:01.018286   13970 cache.go:56] Caching tarball of preloaded images
	I0819 11:15:01.018336   13970 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:15:01.018341   13970 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:15:01.018391   13970 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/multinode-540000/config.json ...
	I0819 11:15:01.018812   13970 start.go:360] acquireMachinesLock for multinode-540000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:15:01.018845   13970 start.go:364] duration metric: took 26.584µs to acquireMachinesLock for "multinode-540000"
	I0819 11:15:01.018855   13970 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:15:01.018862   13970 fix.go:54] fixHost starting: 
	I0819 11:15:01.018993   13970 fix.go:112] recreateIfNeeded on multinode-540000: state=Stopped err=<nil>
	W0819 11:15:01.019001   13970 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:15:01.027297   13970 out.go:177] * Restarting existing qemu2 VM for "multinode-540000" ...
	I0819 11:15:01.031258   13970 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:15:01.031311   13970 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:61:07:48:f6:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2
	I0819 11:15:01.033522   13970 main.go:141] libmachine: STDOUT: 
	I0819 11:15:01.033542   13970 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:15:01.033568   13970 fix.go:56] duration metric: took 14.706833ms for fixHost
	I0819 11:15:01.033575   13970 start.go:83] releasing machines lock for "multinode-540000", held for 14.725625ms
	W0819 11:15:01.033581   13970 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:15:01.033617   13970 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:15:01.033622   13970 start.go:729] Will try again in 5 seconds ...
	I0819 11:15:06.035728   13970 start.go:360] acquireMachinesLock for multinode-540000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:15:06.036150   13970 start.go:364] duration metric: took 329.5µs to acquireMachinesLock for "multinode-540000"
	I0819 11:15:06.036278   13970 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:15:06.036299   13970 fix.go:54] fixHost starting: 
	I0819 11:15:06.037104   13970 fix.go:112] recreateIfNeeded on multinode-540000: state=Stopped err=<nil>
	W0819 11:15:06.037136   13970 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:15:06.041686   13970 out.go:177] * Restarting existing qemu2 VM for "multinode-540000" ...
	I0819 11:15:06.049609   13970 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:15:06.049845   13970 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:61:07:48:f6:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/multinode-540000/disk.qcow2
	I0819 11:15:06.058998   13970 main.go:141] libmachine: STDOUT: 
	I0819 11:15:06.059081   13970 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:15:06.059152   13970 fix.go:56] duration metric: took 22.855375ms for fixHost
	I0819 11:15:06.059170   13970 start.go:83] releasing machines lock for "multinode-540000", held for 22.998459ms
	W0819 11:15:06.059356   13970 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-540000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-540000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:15:06.066690   13970 out.go:201] 
	W0819 11:15:06.070688   13970 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:15:06.070714   13970 out.go:270] * 
	* 
	W0819 11:15:06.073666   13970 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:15:06.080622   13970 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-540000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (68.779584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (20.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-540000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-540000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-540000-m01 --driver=qemu2 : exit status 80 (9.979512917s)

-- stdout --
	* [multinode-540000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-540000-m01" primary control-plane node in "multinode-540000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-540000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-540000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-540000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-540000-m02 --driver=qemu2 : exit status 80 (10.202333792s)

-- stdout --
	* [multinode-540000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-540000-m02" primary control-plane node in "multinode-540000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-540000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-540000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-540000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-540000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-540000: exit status 83 (79.350666ms)

-- stdout --
	* The control-plane node multinode-540000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-540000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-540000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-540000 -n multinode-540000: exit status 7 (31.70875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.41s)

TestPreload (10.09s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-233000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-233000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.94739675s)

-- stdout --
	* [test-preload-233000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-233000" primary control-plane node in "test-preload-233000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-233000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:15:26.714112   14038 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:15:26.714247   14038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:15:26.714251   14038 out.go:358] Setting ErrFile to fd 2...
	I0819 11:15:26.714254   14038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:15:26.714382   14038 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:15:26.715430   14038 out.go:352] Setting JSON to false
	I0819 11:15:26.731216   14038 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6293,"bootTime":1724085033,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:15:26.731298   14038 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:15:26.737359   14038 out.go:177] * [test-preload-233000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:15:26.744317   14038 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:15:26.744379   14038 notify.go:220] Checking for updates...
	I0819 11:15:26.751292   14038 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:15:26.754265   14038 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:15:26.757306   14038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:15:26.760235   14038 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:15:26.763307   14038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:15:26.766576   14038 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:15:26.766632   14038 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:15:26.771210   14038 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:15:26.778343   14038 start.go:297] selected driver: qemu2
	I0819 11:15:26.778351   14038 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:15:26.778361   14038 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:15:26.780568   14038 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:15:26.784239   14038 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:15:26.787346   14038 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:15:26.787368   14038 cni.go:84] Creating CNI manager for ""
	I0819 11:15:26.787379   14038 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:15:26.787383   14038 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:15:26.787412   14038 start.go:340] cluster config:
	{Name:test-preload-233000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:15:26.791077   14038 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:26.797303   14038 out.go:177] * Starting "test-preload-233000" primary control-plane node in "test-preload-233000" cluster
	I0819 11:15:26.801238   14038 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0819 11:15:26.801315   14038 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/test-preload-233000/config.json ...
	I0819 11:15:26.801330   14038 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/test-preload-233000/config.json: {Name:mkedd1c5262137777deb6f8c5d6457828afd50d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:15:26.801336   14038 cache.go:107] acquiring lock: {Name:mkcdb77e9d2db010ca1e12358ad545390db01839 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:26.801343   14038 cache.go:107] acquiring lock: {Name:mk002a9cfd316b7d04ab60e64393b097c984da8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:26.801355   14038 cache.go:107] acquiring lock: {Name:mk9b9a8a471a9ad482aa0d636a95e80fbaeddc30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:26.801336   14038 cache.go:107] acquiring lock: {Name:mk64a0dbd086912bce1440b78e0aa5d0cfe1f816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:26.801513   14038 cache.go:107] acquiring lock: {Name:mk19685215d96d8fba32d0f90fc418dec16713d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:26.801546   14038 cache.go:107] acquiring lock: {Name:mkf1322ac19caa3df4299bec9b6f97f602f2c8a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:26.801540   14038 cache.go:107] acquiring lock: {Name:mk7d2a739c754d29f749dee5661a7b4a993ced87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:26.801663   14038 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 11:15:26.801689   14038 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 11:15:26.801628   14038 cache.go:107] acquiring lock: {Name:mka511c987499459579b55c977104daa66afbecb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:15:26.801753   14038 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 11:15:26.801758   14038 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 11:15:26.801767   14038 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:15:26.801722   14038 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 11:15:26.801885   14038 start.go:360] acquireMachinesLock for test-preload-233000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:15:26.801930   14038 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:15:26.801938   14038 start.go:364] duration metric: took 41µs to acquireMachinesLock for "test-preload-233000"
	I0819 11:15:26.801952   14038 start.go:93] Provisioning new machine with config: &{Name:test-preload-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:15:26.802003   14038 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:15:26.802034   14038 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:15:26.810097   14038 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:15:26.814281   14038 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 11:15:26.814318   14038 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:15:26.814553   14038 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 11:15:26.814953   14038 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 11:15:26.817113   14038 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:15:26.817137   14038 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:15:26.817123   14038 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 11:15:26.817154   14038 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 11:15:26.828596   14038 start.go:159] libmachine.API.Create for "test-preload-233000" (driver="qemu2")
	I0819 11:15:26.828621   14038 client.go:168] LocalClient.Create starting
	I0819 11:15:26.828741   14038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:15:26.828779   14038 main.go:141] libmachine: Decoding PEM data...
	I0819 11:15:26.828792   14038 main.go:141] libmachine: Parsing certificate...
	I0819 11:15:26.828835   14038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:15:26.828859   14038 main.go:141] libmachine: Decoding PEM data...
	I0819 11:15:26.828866   14038 main.go:141] libmachine: Parsing certificate...
	I0819 11:15:26.829283   14038 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:15:27.001318   14038 main.go:141] libmachine: Creating SSH key...
	I0819 11:15:27.120166   14038 main.go:141] libmachine: Creating Disk image...
	I0819 11:15:27.120187   14038 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:15:27.120435   14038 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2
	I0819 11:15:27.130700   14038 main.go:141] libmachine: STDOUT: 
	I0819 11:15:27.130716   14038 main.go:141] libmachine: STDERR: 
	I0819 11:15:27.130766   14038 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2 +20000M
	I0819 11:15:27.138974   14038 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:15:27.138988   14038 main.go:141] libmachine: STDERR: 
	I0819 11:15:27.139002   14038 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2
	I0819 11:15:27.139005   14038 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:15:27.139020   14038 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:15:27.139049   14038 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:64:ac:84:c3:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2
	I0819 11:15:27.140890   14038 main.go:141] libmachine: STDOUT: 
	I0819 11:15:27.140915   14038 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:15:27.140942   14038 client.go:171] duration metric: took 312.31875ms to LocalClient.Create
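This STDERR is the actual point of failure for the run: socket_vmnet_client connects to the socket_vmnet daemon's unix socket and launches QEMU with the connected descriptor inherited (which is what -netdev socket,id=net0,fd=3 above refers to), so a refused connection means the daemon is not listening and the VM never starts. A minimal reachability check for that socket in Go, assuming the default /var/run/socket_vmnet path used in this run:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Succeeds only if a socket_vmnet daemon is accepting connections.
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err) // e.g. "connection refused", as above
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}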
	I0819 11:15:27.221882   14038 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0819 11:15:27.226896   14038 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 11:15:27.227011   14038 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 11:15:27.248617   14038 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0819 11:15:27.303188   14038 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0819 11:15:27.346612   14038 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 11:15:27.346691   14038 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 11:15:27.352963   14038 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0819 11:15:27.385607   14038 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0819 11:15:27.385647   14038 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 584.153125ms
	I0819 11:15:27.385691   14038 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0819 11:15:27.843034   14038 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 11:15:27.843104   14038 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 11:15:28.168279   14038 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 11:15:28.168341   14038 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.367008791s
	I0819 11:15:28.168364   14038 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 11:15:29.141206   14038 start.go:128] duration metric: took 2.339192084s to createHost
	I0819 11:15:29.141247   14038 start.go:83] releasing machines lock for "test-preload-233000", held for 2.339314875s
	W0819 11:15:29.141293   14038 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:15:29.150852   14038 out.go:177] * Deleting "test-preload-233000" in qemu2 ...
	W0819 11:15:29.179936   14038 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:15:29.179971   14038 start.go:729] Will try again in 5 seconds ...
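A StartHost failure is not immediately fatal: the surrounding lines show the half-created profile being deleted and creation retried once after a fixed 5-second delay. A rough sketch of that shape, with createHost and deleteHost as hypothetical stand-ins rather than minikube's actual API:

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// Hypothetical stand-ins for the host lifecycle.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	func deleteHost(name string) { log.Printf("* Deleting %q in qemu2 ...", name) }

	// Mirrors the retry-once-after-5s behavior in the log.
	func startWithRetry(name string) error {
		err := createHost(name)
		if err == nil {
			return nil
		}
		log.Printf("! StartHost failed, but will try again: %v", err)
		deleteHost(name)
		time.Sleep(5 * time.Second)
		return createHost(name)
	}

	func main() {
		if err := startWithRetry("test-preload-233000"); err != nil {
			log.Fatalf("X Exiting: %v", err)
		}
	}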
	I0819 11:15:29.605587   14038 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0819 11:15:29.605640   14038 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.804036083s
	I0819 11:15:29.605693   14038 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0819 11:15:30.824419   14038 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0819 11:15:30.824527   14038 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.023183917s
	I0819 11:15:30.824559   14038 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0819 11:15:31.399056   14038 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0819 11:15:31.399114   14038 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.597791666s
	I0819 11:15:31.399141   14038 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0819 11:15:31.521690   14038 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0819 11:15:31.521739   14038 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.720431917s
	I0819 11:15:31.521763   14038 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0819 11:15:32.408851   14038 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0819 11:15:32.409305   14038 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.607400333s
	I0819 11:15:32.409782   14038 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
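Each "save to tar file" line maps an image reference onto a cache path: the registry and repository path are kept verbatim under an architecture directory, and the ':' before the tag becomes '_'. A hypothetical helper reproducing the mapping visible in these lines (not minikube's actual code):

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath mirrors e.g. "registry.k8s.io/kube-proxy:v1.24.4" ->
	// "<home>/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4".
	func cachePath(minikubeHome, arch, ref string) string {
		return filepath.Join(minikubeHome, "cache", "images", arch, strings.ReplaceAll(ref, ":", "_"))
	}

	func main() {
		fmt.Println(cachePath("/Users/jenkins/.minikube", "arm64", "registry.k8s.io/kube-proxy:v1.24.4"))
	}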
	I0819 11:15:34.180225   14038 start.go:360] acquireMachinesLock for test-preload-233000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:15:34.180675   14038 start.go:364] duration metric: took 363.209µs to acquireMachinesLock for "test-preload-233000"
	I0819 11:15:34.180823   14038 start.go:93] Provisioning new machine with config: &{Name:test-preload-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:15:34.181194   14038 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:15:34.187887   14038 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:15:34.238740   14038 start.go:159] libmachine.API.Create for "test-preload-233000" (driver="qemu2")
	I0819 11:15:34.238945   14038 client.go:168] LocalClient.Create starting
	I0819 11:15:34.239094   14038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:15:34.239160   14038 main.go:141] libmachine: Decoding PEM data...
	I0819 11:15:34.239184   14038 main.go:141] libmachine: Parsing certificate...
	I0819 11:15:34.239262   14038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:15:34.239307   14038 main.go:141] libmachine: Decoding PEM data...
	I0819 11:15:34.239324   14038 main.go:141] libmachine: Parsing certificate...
	I0819 11:15:34.239881   14038 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:15:34.402845   14038 main.go:141] libmachine: Creating SSH key...
	I0819 11:15:34.562303   14038 main.go:141] libmachine: Creating Disk image...
	I0819 11:15:34.562313   14038 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:15:34.562546   14038 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2
	I0819 11:15:34.572299   14038 main.go:141] libmachine: STDOUT: 
	I0819 11:15:34.572319   14038 main.go:141] libmachine: STDERR: 
	I0819 11:15:34.572369   14038 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2 +20000M
	I0819 11:15:34.580277   14038 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:15:34.580291   14038 main.go:141] libmachine: STDERR: 
	I0819 11:15:34.580303   14038 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2
	I0819 11:15:34.580308   14038 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:15:34.580330   14038 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:15:34.580372   14038 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:e5:77:e1:f6:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/test-preload-233000/disk.qcow2
	I0819 11:15:34.582047   14038 main.go:141] libmachine: STDOUT: 
	I0819 11:15:34.582062   14038 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:15:34.582074   14038 client.go:171] duration metric: took 343.124833ms to LocalClient.Create
	I0819 11:15:35.288957   14038 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0819 11:15:35.289042   14038 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.487567625s
	I0819 11:15:35.289072   14038 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0819 11:15:35.289129   14038 cache.go:87] Successfully saved all images to host disk.
	I0819 11:15:36.584291   14038 start.go:128] duration metric: took 2.403036333s to createHost
	I0819 11:15:36.584348   14038 start.go:83] releasing machines lock for "test-preload-233000", held for 2.403656959s
	W0819 11:15:36.584663   14038 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-233000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:15:36.599458   14038 out.go:201] 
	W0819 11:15:36.604399   14038 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:15:36.604425   14038 out.go:270] * 
	W0819 11:15:36.607168   14038 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:15:36.618312   14038 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-233000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-19 11:15:36.635857 -0700 PDT m=+605.893325584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-233000 -n test-preload-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-233000 -n test-preload-233000: exit status 7 (65.605708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-233000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-233000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-233000
--- FAIL: TestPreload (10.09s)

                                                
                                    
TestScheduledStopUnix (10.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-524000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-524000 --memory=2048 --driver=qemu2 : exit status 80 (9.870626459s)

                                                
                                                
-- stdout --
	* [scheduled-stop-524000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-524000" primary control-plane node in "scheduled-stop-524000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-524000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-524000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-524000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-524000" primary control-plane node in "scheduled-stop-524000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-524000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-524000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-19 11:15:46.650809 -0700 PDT m=+615.908328959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-524000 -n scheduled-stop-524000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-524000 -n scheduled-stop-524000: exit status 7 (71.569041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-524000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-524000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-524000
--- FAIL: TestScheduledStopUnix (10.02s)

                                                
                                    
TestSkaffold (12.24s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe44049166 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe44049166 version: (1.058983125s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-414000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-414000 --memory=2600 --driver=qemu2 : exit status 80 (9.760752709s)

                                                
                                                
-- stdout --
	* [skaffold-414000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-414000" primary control-plane node in "skaffold-414000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-414000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-414000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-414000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-414000" primary control-plane node in "skaffold-414000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-414000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-414000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-19 11:15:58.894734 -0700 PDT m=+628.152316709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-414000 -n skaffold-414000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-414000 -n skaffold-414000: exit status 7 (62.83575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-414000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-414000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-414000
--- FAIL: TestSkaffold (12.24s)

                                                
                                    
TestRunningBinaryUpgrade (598.05s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3930351684 start -p running-upgrade-015000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3930351684 start -p running-upgrade-015000 --memory=2200 --vm-driver=qemu2 : (51.222029s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-015000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-015000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m32.783930375s)

                                                
                                                
-- stdout --
	* [running-upgrade-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-015000" primary control-plane node in "running-upgrade-015000" cluster
	* Updating the running qemu2 "running-upgrade-015000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:17:32.202034   14497 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:17:32.202154   14497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:17:32.202157   14497 out.go:358] Setting ErrFile to fd 2...
	I0819 11:17:32.202160   14497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:17:32.202275   14497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:17:32.203478   14497 out.go:352] Setting JSON to false
	I0819 11:17:32.219738   14497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6419,"bootTime":1724085033,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:17:32.219819   14497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:17:32.223985   14497 out.go:177] * [running-upgrade-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:17:32.231088   14497 notify.go:220] Checking for updates...
	I0819 11:17:32.234016   14497 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:17:32.237968   14497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:17:32.241030   14497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:17:32.244058   14497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:17:32.253045   14497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:17:32.260959   14497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:17:32.264262   14497 config.go:182] Loaded profile config "running-upgrade-015000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:17:32.268011   14497 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 11:17:32.270992   14497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:17:32.275031   14497 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:17:32.281972   14497 start.go:297] selected driver: qemu2
	I0819 11:17:32.281976   14497 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52176 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:17:32.282023   14497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:17:32.284274   14497 cni.go:84] Creating CNI manager for ""
	I0819 11:17:32.284289   14497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:17:32.284313   14497 start.go:340] cluster config:
	{Name:running-upgrade-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52176 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:17:32.284362   14497 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:17:32.290957   14497 out.go:177] * Starting "running-upgrade-015000" primary control-plane node in "running-upgrade-015000" cluster
	I0819 11:17:32.294978   14497 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:17:32.294991   14497 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 11:17:32.295005   14497 cache.go:56] Caching tarball of preloaded images
	I0819 11:17:32.295060   14497 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:17:32.295071   14497 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 11:17:32.295118   14497 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/config.json ...
	I0819 11:17:32.295531   14497 start.go:360] acquireMachinesLock for running-upgrade-015000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:17:32.295564   14497 start.go:364] duration metric: took 26.833µs to acquireMachinesLock for "running-upgrade-015000"
	I0819 11:17:32.295573   14497 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:17:32.295579   14497 fix.go:54] fixHost starting: 
	I0819 11:17:32.296174   14497 fix.go:112] recreateIfNeeded on running-upgrade-015000: state=Running err=<nil>
	W0819 11:17:32.296181   14497 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:17:32.300026   14497 out.go:177] * Updating the running qemu2 "running-upgrade-015000" VM ...
	I0819 11:17:32.308042   14497 machine.go:93] provisionDockerMachine start ...
	I0819 11:17:32.308074   14497 main.go:141] libmachine: Using SSH client type: native
	I0819 11:17:32.308185   14497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025605a0] 0x102562e00 <nil>  [] 0s} localhost 52144 <nil> <nil>}
	I0819 11:17:32.308189   14497 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:17:32.377343   14497 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-015000
	
	I0819 11:17:32.377355   14497 buildroot.go:166] provisioning hostname "running-upgrade-015000"
	I0819 11:17:32.377413   14497 main.go:141] libmachine: Using SSH client type: native
	I0819 11:17:32.377517   14497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025605a0] 0x102562e00 <nil>  [] 0s} localhost 52144 <nil> <nil>}
	I0819 11:17:32.377525   14497 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-015000 && echo "running-upgrade-015000" | sudo tee /etc/hostname
	I0819 11:17:32.450772   14497 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-015000
	
	I0819 11:17:32.450827   14497 main.go:141] libmachine: Using SSH client type: native
	I0819 11:17:32.450947   14497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025605a0] 0x102562e00 <nil>  [] 0s} localhost 52144 <nil> <nil>}
	I0819 11:17:32.450956   14497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-015000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-015000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-015000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:17:32.523217   14497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:17:32.523228   14497 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19468-11838/.minikube CaCertPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19468-11838/.minikube}
	I0819 11:17:32.523240   14497 buildroot.go:174] setting up certificates
	I0819 11:17:32.523247   14497 provision.go:84] configureAuth start
	I0819 11:17:32.523254   14497 provision.go:143] copyHostCerts
	I0819 11:17:32.523337   14497 exec_runner.go:144] found /Users/jenkins/minikube-integration/19468-11838/.minikube/key.pem, removing ...
	I0819 11:17:32.523342   14497 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19468-11838/.minikube/key.pem
	I0819 11:17:32.523466   14497 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19468-11838/.minikube/key.pem (1675 bytes)
	I0819 11:17:32.523651   14497 exec_runner.go:144] found /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.pem, removing ...
	I0819 11:17:32.523655   14497 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.pem
	I0819 11:17:32.523711   14497 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.pem (1082 bytes)
	I0819 11:17:32.523817   14497 exec_runner.go:144] found /Users/jenkins/minikube-integration/19468-11838/.minikube/cert.pem, removing ...
	I0819 11:17:32.523821   14497 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19468-11838/.minikube/cert.pem
	I0819 11:17:32.523868   14497 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19468-11838/.minikube/cert.pem (1123 bytes)
	I0819 11:17:32.523962   14497 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-015000 san=[127.0.0.1 localhost minikube running-upgrade-015000]
	I0819 11:17:32.571576   14497 provision.go:177] copyRemoteCerts
	I0819 11:17:32.571605   14497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:17:32.571611   14497 sshutil.go:53] new ssh client: &{IP:localhost Port:52144 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/running-upgrade-015000/id_rsa Username:docker}
	I0819 11:17:32.610289   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0819 11:17:32.616713   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:17:32.625292   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 11:17:32.632070   14497 provision.go:87] duration metric: took 108.816583ms to configureAuth
	I0819 11:17:32.632079   14497 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:17:32.632191   14497 config.go:182] Loaded profile config "running-upgrade-015000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:17:32.632231   14497 main.go:141] libmachine: Using SSH client type: native
	I0819 11:17:32.632323   14497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025605a0] 0x102562e00 <nil>  [] 0s} localhost 52144 <nil> <nil>}
	I0819 11:17:32.632328   14497 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 11:17:32.704896   14497 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 11:17:32.704908   14497 buildroot.go:70] root file system type: tmpfs
	I0819 11:17:32.704950   14497 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 11:17:32.705018   14497 main.go:141] libmachine: Using SSH client type: native
	I0819 11:17:32.705133   14497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025605a0] 0x102562e00 <nil>  [] 0s} localhost 52144 <nil> <nil>}
	I0819 11:17:32.705166   14497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 11:17:32.783556   14497 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 11:17:32.783613   14497 main.go:141] libmachine: Using SSH client type: native
	I0819 11:17:32.783728   14497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025605a0] 0x102562e00 <nil>  [] 0s} localhost 52144 <nil> <nil>}
	I0819 11:17:32.783739   14497 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 11:17:32.855837   14497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:17:32.855849   14497 machine.go:96] duration metric: took 547.804083ms to provisionDockerMachine
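The SSH one-liner above installs the rendered unit idempotently: docker.service.new is diffed against the live /lib/systemd/system/docker.service, and only when they differ is it moved into place and followed by daemon-reload, enable, and restart. A rough Go equivalent of that guard, shelling out the same way (simplified error handling, not minikube's actual code):

	package main

	import (
		"log"
		"os/exec"
	)

	// diff -u exits non-zero when the files differ (or the target is missing),
	// which is what triggers the swap-and-restart branch below.
	func installIfChanged() {
		if exec.Command("sudo", "diff", "-u",
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new").Run() == nil {
			return // identical: keep the running service untouched
		}
		for _, args := range [][]string{
			{"mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if err := exec.Command("sudo", args...).Run(); err != nil {
				log.Fatalf("sudo %v: %v", args, err)
			}
		}
	}

	func main() { installIfChanged() }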
	I0819 11:17:32.855855   14497 start.go:293] postStartSetup for "running-upgrade-015000" (driver="qemu2")
	I0819 11:17:32.855862   14497 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:17:32.855905   14497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:17:32.855914   14497 sshutil.go:53] new ssh client: &{IP:localhost Port:52144 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/running-upgrade-015000/id_rsa Username:docker}
	I0819 11:17:32.893580   14497 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:17:32.894918   14497 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 11:17:32.894927   14497 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19468-11838/.minikube/addons for local assets ...
	I0819 11:17:32.895008   14497 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19468-11838/.minikube/files for local assets ...
	I0819 11:17:32.895138   14497 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem -> 123172.pem in /etc/ssl/certs
	I0819 11:17:32.895279   14497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:17:32.898052   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem --> /etc/ssl/certs/123172.pem (1708 bytes)
	I0819 11:17:32.904778   14497 start.go:296] duration metric: took 48.918458ms for postStartSetup
	I0819 11:17:32.904793   14497 fix.go:56] duration metric: took 609.219291ms for fixHost
	I0819 11:17:32.904823   14497 main.go:141] libmachine: Using SSH client type: native
	I0819 11:17:32.904920   14497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025605a0] 0x102562e00 <nil>  [] 0s} localhost 52144 <nil> <nil>}
	I0819 11:17:32.904924   14497 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:17:32.977563   14497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724091453.142777555
	
	I0819 11:17:32.977570   14497 fix.go:216] guest clock: 1724091453.142777555
	I0819 11:17:32.977574   14497 fix.go:229] Guest: 2024-08-19 11:17:33.142777555 -0700 PDT Remote: 2024-08-19 11:17:32.904794 -0700 PDT m=+0.723023418 (delta=237.983555ms)
	I0819 11:17:32.977584   14497 fix.go:200] guest clock delta is within tolerance: 237.983555ms
	I0819 11:17:32.977586   14497 start.go:83] releasing machines lock for "running-upgrade-015000", held for 682.022ms
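The fixHost step reads the guest clock over SSH (`date +%s.%N`), compares it against the host clock, and skips resynchronizing when the delta is within tolerance, as it is here (237.98ms). A small sketch of that comparison; the 2-second tolerance below is an assumed placeholder, not minikube's actual constant:

```go
// Sketch of the guest-clock tolerance check (fix.go:200 above); the
// 2s tolerance here is an assumption, not minikube's exact value.
package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance returns the absolute guest/host clock skew
// and whether it is small enough to leave the guest clock alone.
func clockDeltaWithinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(1724091453, 142777555) // parsed from `date +%s.%N` output
	host := time.Now()
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
```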
	I0819 11:17:32.977648   14497 ssh_runner.go:195] Run: cat /version.json
	I0819 11:17:32.977651   14497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:17:32.977658   14497 sshutil.go:53] new ssh client: &{IP:localhost Port:52144 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/running-upgrade-015000/id_rsa Username:docker}
	I0819 11:17:32.977666   14497 sshutil.go:53] new ssh client: &{IP:localhost Port:52144 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/running-upgrade-015000/id_rsa Username:docker}
	W0819 11:17:32.978234   14497 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52144: connect: connection refused
	I0819 11:17:32.978259   14497 retry.go:31] will retry after 184.267871ms: dial tcp [::1]:52144: connect: connection refused
	W0819 11:17:33.012923   14497 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 11:17:33.012976   14497 ssh_runner.go:195] Run: systemctl --version
	I0819 11:17:33.014788   14497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:17:33.016455   14497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:17:33.016489   14497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 11:17:33.019578   14497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 11:17:33.028604   14497 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
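The two find/sed pipelines above rewrite any pre-existing bridge and podman CNI configs so their subnets and gateways match the 10.244.0.0/16 pod CIDR. A rough Go equivalent of the subnet rewrite; the regex and file handling are simplified for illustration:

```go
// Illustrative Go counterpart of the sed rewrite above: force the
// "subnet" entries in a CNI conflist to the cluster pod CIDR.
package main

import (
	"fmt"
	"regexp"
)

var subnetRe = regexp.MustCompile(`"subnet":\s*"[^"]*"`)

// rewriteSubnet replaces every "subnet" value in the config with podCIDR.
func rewriteSubnet(conf []byte, podCIDR string) []byte {
	return subnetRe.ReplaceAll(conf, []byte(fmt.Sprintf(`"subnet": %q`, podCIDR)))
}

func main() {
	in := []byte(`{"ipam": {"subnet": "10.88.0.0/16"}}`)
	fmt.Println(string(rewriteSubnet(in, "10.244.0.0/16")))
}
```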
	I0819 11:17:33.028618   14497 start.go:495] detecting cgroup driver to use...
	I0819 11:17:33.028722   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:17:33.033883   14497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 11:17:33.036634   14497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 11:17:33.039859   14497 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 11:17:33.039877   14497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 11:17:33.042631   14497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:17:33.045678   14497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 11:17:33.048527   14497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:17:33.051830   14497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:17:33.055281   14497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 11:17:33.058019   14497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 11:17:33.061095   14497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 11:17:33.063897   14497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:17:33.066583   14497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:17:33.069032   14497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:17:33.158468   14497 ssh_runner.go:195] Run: sudo systemctl restart containerd
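The preceding run of sed edits reconfigures containerd for the cgroupfs driver, chiefly by forcing `SystemdCgroup = false` in config.toml before the daemon restart above. An illustrative Go version of that single edit:

```go
// Rough Go counterpart of the `sed` edit that sets the containerd cgroup
// driver: rewrite `SystemdCgroup = ...` lines in config.toml in place.
package main

import (
	"fmt"
	"regexp"
)

var cgRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

// setSystemdCgroup toggles the SystemdCgroup option, preserving indentation.
func setSystemdCgroup(toml []byte, enabled bool) []byte {
	return cgRe.ReplaceAll(toml, []byte(fmt.Sprintf("${1}SystemdCgroup = %v", enabled)))
}

func main() {
	in := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
	fmt.Print(string(setSystemdCgroup(in, false))) // cgroupfs driver, as in the log
}
```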
	I0819 11:17:33.171663   14497 start.go:495] detecting cgroup driver to use...
	I0819 11:17:33.171733   14497 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 11:17:33.178649   14497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:17:33.183600   14497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:17:33.191381   14497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:17:33.195943   14497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:17:33.200888   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:17:33.247325   14497 ssh_runner.go:195] Run: which cri-dockerd
	I0819 11:17:33.248597   14497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 11:17:33.251370   14497 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 11:17:33.256367   14497 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 11:17:33.348058   14497 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 11:17:33.439143   14497 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 11:17:33.439210   14497 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 11:17:33.444743   14497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:17:33.532335   14497 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:17:46.168330   14497 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.636041125s)
	I0819 11:17:46.168401   14497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 11:17:46.174394   14497 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 11:17:46.182929   14497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:17:46.188215   14497 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 11:17:46.285556   14497 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 11:17:46.362629   14497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:17:46.443197   14497 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 11:17:46.448954   14497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:17:46.453544   14497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:17:46.536939   14497 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 11:17:46.574742   14497 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 11:17:46.574823   14497 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 11:17:46.576992   14497 start.go:563] Will wait 60s for crictl version
	I0819 11:17:46.577038   14497 ssh_runner.go:195] Run: which crictl
	I0819 11:17:46.578450   14497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:17:46.594541   14497 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 11:17:46.594611   14497 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:17:46.606842   14497 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:17:46.623245   14497 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 11:17:46.623373   14497 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 11:17:46.624756   14497 kubeadm.go:883] updating cluster {Name:running-upgrade-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52176 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 11:17:46.624805   14497 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:17:46.624840   14497 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:17:46.638124   14497 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:17:46.638134   14497 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:17:46.638181   14497 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:17:46.641104   14497 ssh_runner.go:195] Run: which lz4
	I0819 11:17:46.642450   14497 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 11:17:46.643714   14497 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 11:17:46.643725   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 11:17:47.489189   14497 docker.go:649] duration metric: took 846.78375ms to copy over tarball
	I0819 11:17:47.489245   14497 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 11:17:48.750356   14497 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.261104791s)
	I0819 11:17:48.750370   14497 ssh_runner.go:146] rm: /preloaded.tar.lz4
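The preload flow above is: stat the remote tarball, copy it only when the stat fails (as it does here), extract it with lz4 into /var, then delete it. A sketch of the stat-gated copy decision; `runRemote` is a stand-in for minikube's SSH runner:

```go
// Sketch of the remote existence check gating the preload copy:
// `stat` exiting non-zero (as in the log) means "copy the tarball".
package main

import (
	"fmt"
	"os/exec"
)

// remoteFileExists reports whether path exists on the target machine;
// runRemote abstracts over minikube's SSH command runner.
func remoteFileExists(runRemote func(string) error, path string) bool {
	return runRemote(fmt.Sprintf(`stat -c "%%s %%y" %s`, path)) == nil
}

func main() {
	local := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }
	if !remoteFileExists(local, "/preloaded.tar.lz4") {
		fmt.Println("copy preloaded-images tarball, then: " +
			"sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4")
	}
}
```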
	I0819 11:17:48.765971   14497 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:17:48.768889   14497 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 11:17:48.774112   14497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:17:48.852870   14497 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:17:50.051533   14497 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.198653167s)
	I0819 11:17:50.051611   14497 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:17:50.065020   14497 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:17:50.065031   14497 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:17:50.065036   14497 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 11:17:50.071442   14497 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:17:50.073301   14497 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:17:50.074439   14497 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:17:50.075307   14497 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:17:50.075955   14497 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:17:50.077368   14497 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:17:50.077535   14497 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:17:50.078835   14497 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:17:50.079218   14497 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:17:50.079253   14497 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:17:50.080357   14497 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 11:17:50.080384   14497 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:17:50.081099   14497 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:17:50.081780   14497 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:17:50.082140   14497 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 11:17:50.083118   14497 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:17:50.475238   14497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:17:50.488710   14497 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 11:17:50.488736   14497 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:17:50.488795   14497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:17:50.502812   14497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 11:17:50.520857   14497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:17:50.526152   14497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:17:50.530157   14497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:17:50.531419   14497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 11:17:50.534311   14497 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 11:17:50.534329   14497 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:17:50.534365   14497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:17:50.540464   14497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 11:17:50.543185   14497 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 11:17:50.543204   14497 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:17:50.543242   14497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0819 11:17:50.564856   14497 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 11:17:50.564994   14497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:17:50.574067   14497 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 11:17:50.574088   14497 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:17:50.574143   14497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:17:50.574697   14497 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 11:17:50.574710   14497 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 11:17:50.574736   14497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 11:17:50.585882   14497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 11:17:50.585948   14497 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 11:17:50.585963   14497 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:17:50.585976   14497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 11:17:50.586008   14497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 11:17:50.591196   14497 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 11:17:50.591215   14497 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:17:50.591266   14497 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:17:50.596400   14497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 11:17:50.598277   14497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 11:17:50.599371   14497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 11:17:50.607816   14497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 11:17:50.607951   14497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:17:50.615352   14497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 11:17:50.615390   14497 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 11:17:50.615400   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 11:17:50.615412   14497 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 11:17:50.615424   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0819 11:17:50.615449   14497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:17:50.616956   14497 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 11:17:50.616965   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0819 11:17:50.632554   14497 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 11:17:50.632571   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0819 11:17:50.736407   14497 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0819 11:17:50.736433   14497 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:17:50.736455   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 11:17:50.875904   14497 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0819 11:17:50.904601   14497 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 11:17:50.904717   14497 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:17:50.941801   14497 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 11:17:50.941823   14497 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:17:50.941884   14497 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:17:50.943874   14497 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:17:50.943884   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0819 11:17:51.511194   14497 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 11:17:51.511268   14497 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 11:17:51.511671   14497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:17:51.516407   14497 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 11:17:51.516477   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 11:17:51.578985   14497 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:17:51.579000   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 11:17:51.812135   14497 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 11:17:51.812173   14497 cache_images.go:92] duration metric: took 1.7471395s to LoadCachedImages
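Each cached image above is streamed into the daemon with `sudo cat <tarball> | docker load`. A minimal local sketch of that step (path as in the log; sudo and the SSH hop are omitted):

```go
// Minimal sketch of the image-load step above: stream a saved image
// tarball from disk into the Docker daemon via `docker load`.
package main

import (
	"log"
	"os"
	"os/exec"
)

func dockerLoad(tarball string) error {
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // equivalent to `cat tarball | docker load`
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := dockerLoad("/var/lib/minikube/images/pause_3.7"); err != nil {
		log.Fatal(err)
	}
}
```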
	W0819 11:17:51.812213   14497 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0819 11:17:51.812217   14497 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 11:17:51.812290   14497 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-015000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:17:51.812347   14497 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 11:17:51.826086   14497 cni.go:84] Creating CNI manager for ""
	I0819 11:17:51.826097   14497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:17:51.826107   14497 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:17:51.826115   14497 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-015000 NodeName:running-upgrade-015000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:17:51.826183   14497 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-015000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
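minikube renders the multi-document kubeadm/kubelet/kube-proxy YAML above from the options struct logged at kubeadm.go:181. A hypothetical, heavily trimmed sketch of that render step using text/template; the template text and field names here are illustrative, not minikube's actual template:

```go
// Hypothetical rendering sketch: minikube generates the kubeadm YAML
// above from templates; this trimmed example shows the same pattern.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, map[string]interface{}{
		"AdvertiseAddress": "10.0.2.15",
		"APIServerPort":    8443,
		"CRISocket":        "unix:///var/run/cri-dockerd.sock",
		"NodeName":         "running-upgrade-015000",
	})
}
```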
	
	I0819 11:17:51.826240   14497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 11:17:51.829528   14497 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:17:51.829557   14497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:17:51.832727   14497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 11:17:51.837828   14497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:17:51.842488   14497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0819 11:17:51.847684   14497 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 11:17:51.849142   14497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:17:51.935545   14497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:17:51.940599   14497 certs.go:68] Setting up /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000 for IP: 10.0.2.15
	I0819 11:17:51.940605   14497 certs.go:194] generating shared ca certs ...
	I0819 11:17:51.940613   14497 certs.go:226] acquiring lock for ca certs: {Name:mka749b3c39f634f903dfb76b75647518084e393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:17:51.940858   14497 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.key
	I0819 11:17:51.940907   14497 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/proxy-client-ca.key
	I0819 11:17:51.940911   14497 certs.go:256] generating profile certs ...
	I0819 11:17:51.940967   14497 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/client.key
	I0819 11:17:51.940979   14497 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.key.e314ccdc
	I0819 11:17:51.940990   14497 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.crt.e314ccdc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 11:17:52.112038   14497 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.crt.e314ccdc ...
	I0819 11:17:52.112054   14497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.crt.e314ccdc: {Name:mkdf5ece32cd374e896aa4c14f0c07bbe8ff07a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:17:52.112376   14497 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.key.e314ccdc ...
	I0819 11:17:52.112383   14497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.key.e314ccdc: {Name:mk1167af0c33a526bef41497158773328127e8a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:17:52.112528   14497 certs.go:381] copying /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.crt.e314ccdc -> /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.crt
	I0819 11:17:52.112664   14497 certs.go:385] copying /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.key.e314ccdc -> /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.key
	I0819 11:17:52.112824   14497 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/proxy-client.key
	I0819 11:17:52.112951   14497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/12317.pem (1338 bytes)
	W0819 11:17:52.112979   14497 certs.go:480] ignoring /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/12317_empty.pem, impossibly tiny 0 bytes
	I0819 11:17:52.112985   14497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:17:52.113005   14497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:17:52.113024   14497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:17:52.113041   14497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/key.pem (1675 bytes)
	I0819 11:17:52.113079   14497 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem (1708 bytes)
	I0819 11:17:52.113499   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:17:52.121193   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:17:52.128666   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:17:52.135471   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:17:52.142964   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 11:17:52.150096   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:17:52.156861   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:17:52.163559   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 11:17:52.170900   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem --> /usr/share/ca-certificates/123172.pem (1708 bytes)
	I0819 11:17:52.177785   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:17:52.184528   14497 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/12317.pem --> /usr/share/ca-certificates/12317.pem (1338 bytes)
	I0819 11:17:52.191485   14497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:17:52.196829   14497 ssh_runner.go:195] Run: openssl version
	I0819 11:17:52.198576   14497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:17:52.201795   14497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:17:52.203270   14497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:17:52.203300   14497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:17:52.205062   14497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:17:52.207785   14497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12317.pem && ln -fs /usr/share/ca-certificates/12317.pem /etc/ssl/certs/12317.pem"
	I0819 11:17:52.210825   14497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12317.pem
	I0819 11:17:52.212158   14497 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:06 /usr/share/ca-certificates/12317.pem
	I0819 11:17:52.212182   14497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12317.pem
	I0819 11:17:52.213897   14497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12317.pem /etc/ssl/certs/51391683.0"
	I0819 11:17:52.216442   14497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123172.pem && ln -fs /usr/share/ca-certificates/123172.pem /etc/ssl/certs/123172.pem"
	I0819 11:17:52.220014   14497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123172.pem
	I0819 11:17:52.221569   14497 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:06 /usr/share/ca-certificates/123172.pem
	I0819 11:17:52.221587   14497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123172.pem
	I0819 11:17:52.223325   14497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123172.pem /etc/ssl/certs/3ec20f2e.0"
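Each CA certificate above is activated by computing its OpenSSL subject hash and symlinking `<hash>.0` to it under /etc/ssl/certs, which is why `b5213941.0` corresponds to minikubeCA.pem. A sketch of the hash lookup, shelling out to openssl as the log does:

```go
// Sketch of the hash-symlink step above: compute the OpenSSL subject
// hash of a CA cert, then link <hash>.0 to it in /etc/ssl/certs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject-name hash of a PEM certificate.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	// e.g. b5213941 -> /etc/ssl/certs/b5213941.0, matching the log above
	fmt.Printf("sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}
```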
	I0819 11:17:52.226390   14497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:17:52.227830   14497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 11:17:52.229770   14497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 11:17:52.231412   14497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 11:17:52.233307   14497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 11:17:52.235252   14497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 11:17:52.237066   14497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
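The `-checkend 86400` probes above verify that each control-plane certificate stays valid for at least another 24 hours. A pure-Go equivalent using crypto/x509 (path as in the log):

```go
// Pure-Go equivalent of `openssl x509 -checkend 86400`: report whether
// a PEM certificate will still be valid d from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Valid iff expiry lies beyond now+d, mirroring -checkend semantics.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
```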
	I0819 11:17:52.238815   14497 kubeadm.go:392] StartCluster: {Name:running-upgrade-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52176 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:17:52.238873   14497 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:17:52.249858   14497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:17:52.252904   14497 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 11:17:52.252911   14497 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 11:17:52.252932   14497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 11:17:52.256251   14497 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:17:52.256290   14497 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-015000" does not appear in /Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:17:52.256304   14497 kubeconfig.go:62] /Users/jenkins/minikube-integration/19468-11838/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-015000" cluster setting kubeconfig missing "running-upgrade-015000" context setting]
	I0819 11:17:52.256466   14497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/kubeconfig: {Name:mkf06e67426049c2259f6e46b5143872117d8aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:17:52.257200   14497 kapi.go:59] client config for running-upgrade-015000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/client.key", CAFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b1bd10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:17:52.258101   14497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 11:17:52.260789   14497 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-015000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
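Drift detection is simply `diff -u` between the installed kubeadm.yaml and the freshly generated `.new` file: exit status 1 (files differ) is the signal for the reconfiguration that follows. A sketch of that check:

```go
// Sketch of the drift check above: `diff -u old new` exits 1 when the
// files differ, which is the signal to reconfigure the cluster.
package main

import (
	"fmt"
	"os/exec"
)

// configDrifted reports whether the two config files differ, returning
// the unified diff when they do.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: drift detected
	}
	return false, "", err // diff itself failed
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}
```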
	I0819 11:17:52.260795   14497 kubeadm.go:1160] stopping kube-system containers ...
	I0819 11:17:52.260833   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:17:52.271958   14497 docker.go:483] Stopping containers: [d296675fa074 43462f14454f 73b6ea415881 f94b194fc3ad f13cc5a0e323 d3589e7f5cd4 d8442dadb356 4ff5b0dbd096 7d868800d6fe 83c0ac22cc21 89cb092cb057 b96fe6fada95 a3a4cd8e25f9]
	I0819 11:17:52.272026   14497 ssh_runner.go:195] Run: docker stop d296675fa074 43462f14454f 73b6ea415881 f94b194fc3ad f13cc5a0e323 d3589e7f5cd4 d8442dadb356 4ff5b0dbd096 7d868800d6fe 83c0ac22cc21 89cb092cb057 b96fe6fada95 a3a4cd8e25f9
	I0819 11:17:52.283665   14497 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 11:17:52.384470   14497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:17:52.388557   14497 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug 19 18:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 19 18:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 19 18:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 19 18:17 /etc/kubernetes/scheduler.conf
	
	I0819 11:17:52.388594   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/admin.conf
	I0819 11:17:52.391913   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:17:52.391943   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:17:52.394825   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/kubelet.conf
	I0819 11:17:52.397504   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:17:52.397531   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:17:52.400733   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/controller-manager.conf
	I0819 11:17:52.403716   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:17:52.403739   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:17:52.406284   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/scheduler.conf
	I0819 11:17:52.409054   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:17:52.409076   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:17:52.412318   14497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:17:52.415133   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:17:52.436386   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:17:53.066698   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:17:53.262818   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:17:53.291960   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:17:53.318623   14497 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:17:53.318713   14497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:17:53.821035   14497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:17:54.320774   14497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:17:54.325204   14497 api_server.go:72] duration metric: took 1.006589125s to wait for apiserver process to appear ...
	I0819 11:17:54.325215   14497 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:17:54.325225   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:17:59.326466   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:17:59.326490   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:04.327336   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:18:04.327425   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:09.328335   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:18:09.328408   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:14.329162   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:18:14.329242   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:19.330607   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:18:19.330747   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:24.332304   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:18:24.332389   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:29.334431   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:18:29.334514   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:34.337133   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:18:34.337213   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:39.339876   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:18:39.339960   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:44.342533   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:18:44.342616   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:49.345311   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:18:49.345394   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:18:54.347984   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
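
	Every healthz probe above fails the same way: the GET to https://10.0.2.15:8443/healthz hits the client timeout after five seconds, is logged as "stopped", and the loop retries, so the apiserver never reports ready and minikube falls back to gathering diagnostics. A sketch of such a probe loop, assuming a 5s per-request timeout and skipped TLS verification for the apiserver's self-signed cert; the overall 4-minute budget is an assumption, not taken from the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second, // each probe gives up after 5s, as in the log
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        const url = "https://10.0.2.15:8443/healthz"
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Printf("stopped: %s: %v\n", url, err)
                continue
            }
            status := resp.StatusCode
            resp.Body.Close()
            if status == http.StatusOK {
                fmt.Println("apiserver healthz ok")
                return
            }
        }
        fmt.Println("apiserver never became healthy")
    }
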
	I0819 11:18:54.348267   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:18:54.378066   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:18:54.378181   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:18:54.392970   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:18:54.393073   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:18:54.404654   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:18:54.404719   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:18:54.415078   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:18:54.415149   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:18:54.425240   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:18:54.425309   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:18:54.440278   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:18:54.440348   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:18:54.449864   14497 logs.go:276] 0 containers: []
	W0819 11:18:54.449875   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:18:54.449930   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:18:54.460129   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
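
	Each docker ps call above maps one control-plane component to its container IDs via the k8s_<component> name prefix that cri-dockerd applies; an empty result (as for kindnet here) is reported as a warning. A sketch of that discovery step, reusing the exact filter and format flags from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // One "docker ps -a" per component, keyed on the k8s_<name> prefix.
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter=name=k8s_"+name, "--format={{.ID}}").Output()
            if err != nil {
                fmt.Println(name, "lookup failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }
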
	I0819 11:18:54.460147   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:18:54.460152   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:18:54.484945   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:18:54.484959   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:18:54.497715   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:18:54.497728   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:18:54.511446   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:18:54.511459   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:18:54.523430   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:18:54.523444   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:18:54.535516   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:18:54.535528   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:18:54.548633   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:18:54.548644   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:18:54.625231   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:18:54.625241   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:18:54.638781   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:18:54.638792   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:18:54.656447   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:18:54.656458   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:18:54.674405   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:18:54.674415   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:18:54.714127   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:18:54.714137   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:18:54.726934   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:18:54.726948   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:18:54.738517   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:18:54.738531   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:18:54.749762   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:18:54.749773   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:18:54.760860   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:18:54.760872   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:18:54.765342   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:18:54.765348   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
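
	The gathering pass that just completed shells out once per source: journalctl for the kubelet and Docker units, dmesg for kernel warnings, kubectl describe nodes, a crictl listing with a docker ps fallback, and docker logs --tail 400 for each discovered container. A condensed sketch of that fan-out, with commands copied from the "Run:" lines and one container ID from the list above standing in for the full set:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one shell command and prints whatever it produced,
    // mirroring the "Gathering logs for X ..." / "Run: ..." pairs above.
    func gather(label, shellCmd string) {
        fmt.Printf("Gathering logs for %s ...\n", label)
        out, _ := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput()
        fmt.Printf("%s", out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        gather("describe nodes",
            "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        gather("kube-apiserver [c75ea31785de]", "docker logs --tail 400 c75ea31785de")
    }

	The same cycle then repeats below, healthz probe, discovery, gathering, every few seconds until the retry budget is exhausted.
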
	I0819 11:18:57.278441   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:19:02.281070   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:19:02.281431   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:19:02.315335   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:19:02.315467   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:19:02.341525   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:19:02.341603   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:19:02.356161   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:19:02.356237   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:19:02.367533   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:19:02.367603   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:19:02.378695   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:19:02.378756   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:19:02.389073   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:19:02.389141   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:19:02.399540   14497 logs.go:276] 0 containers: []
	W0819 11:19:02.399553   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:19:02.399611   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:19:02.409784   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:19:02.409803   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:19:02.409809   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:19:02.431264   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:19:02.431276   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:19:02.442438   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:19:02.442450   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:19:02.455772   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:19:02.455783   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:19:02.467303   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:19:02.467316   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:19:02.480295   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:19:02.480306   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:19:02.493004   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:19:02.493015   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:19:02.504435   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:19:02.504448   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:19:02.521201   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:19:02.521211   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:19:02.532575   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:19:02.532589   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:19:02.568543   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:19:02.568549   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:19:02.582996   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:19:02.583009   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:19:02.594661   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:19:02.594671   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:19:02.599285   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:19:02.599293   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:19:02.634546   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:19:02.634556   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:19:02.649076   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:19:02.649095   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:19:02.673820   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:19:02.673829   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:19:05.187518   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:19:10.190085   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:19:10.190514   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:19:10.225558   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:19:10.225695   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:19:10.248132   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:19:10.248236   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:19:10.267033   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:19:10.267097   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:19:10.278580   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:19:10.278651   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:19:10.289070   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:19:10.289139   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:19:10.299947   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:19:10.300017   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:19:10.310004   14497 logs.go:276] 0 containers: []
	W0819 11:19:10.310015   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:19:10.310066   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:19:10.321503   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:19:10.321529   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:19:10.321540   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:19:10.332455   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:19:10.332465   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:19:10.346831   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:19:10.346842   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:19:10.358177   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:19:10.358187   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:19:10.370495   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:19:10.370505   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:19:10.384759   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:19:10.384770   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:19:10.399628   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:19:10.399639   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:19:10.425523   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:19:10.425532   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:19:10.444276   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:19:10.444287   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:19:10.448549   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:19:10.448557   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:19:10.484053   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:19:10.484065   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:19:10.497949   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:19:10.497961   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:19:10.509727   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:19:10.509741   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:19:10.521302   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:19:10.521315   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:19:10.539629   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:19:10.539641   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:19:10.554814   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:19:10.554825   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:19:10.591037   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:19:10.591046   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:19:13.104345   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:19:18.107170   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:19:18.107538   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:19:18.147274   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:19:18.147413   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:19:18.174124   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:19:18.174226   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:19:18.188370   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:19:18.188450   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:19:18.200719   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:19:18.200793   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:19:18.211063   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:19:18.211152   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:19:18.222215   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:19:18.222280   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:19:18.233797   14497 logs.go:276] 0 containers: []
	W0819 11:19:18.233812   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:19:18.233866   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:19:18.244545   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:19:18.244564   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:19:18.244570   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:19:18.280633   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:19:18.280642   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:19:18.294499   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:19:18.294510   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:19:18.308450   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:19:18.308463   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:19:18.320665   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:19:18.320677   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:19:18.332262   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:19:18.332273   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:19:18.361538   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:19:18.361552   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:19:18.374682   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:19:18.374695   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:19:18.389411   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:19:18.389420   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:19:18.401036   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:19:18.401048   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:19:18.415793   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:19:18.415804   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:19:18.426938   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:19:18.426949   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:19:18.446313   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:19:18.446325   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:19:18.458184   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:19:18.458196   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:19:18.472625   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:19:18.472637   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:19:18.476993   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:19:18.477001   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:19:18.515874   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:19:18.515887   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:19:21.029754   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:19:26.032377   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:19:26.032873   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:19:26.072064   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:19:26.072190   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:19:26.093625   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:19:26.093730   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:19:26.109189   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:19:26.109269   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:19:26.121326   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:19:26.121395   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:19:26.132375   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:19:26.132431   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:19:26.142886   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:19:26.142952   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:19:26.153657   14497 logs.go:276] 0 containers: []
	W0819 11:19:26.153671   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:19:26.153729   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:19:26.164197   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:19:26.164212   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:19:26.164217   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:19:26.178296   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:19:26.178307   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:19:26.190155   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:19:26.190165   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:19:26.215468   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:19:26.215476   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:19:26.229286   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:19:26.229295   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:19:26.234371   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:19:26.234381   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:19:26.245508   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:19:26.245518   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:19:26.266082   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:19:26.266096   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:19:26.278436   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:19:26.278447   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:19:26.292301   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:19:26.292315   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:19:26.304237   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:19:26.304246   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:19:26.341440   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:19:26.341447   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:19:26.353004   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:19:26.353016   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:19:26.366447   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:19:26.366460   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:19:26.378706   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:19:26.378718   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:19:26.390200   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:19:26.390209   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:19:26.401965   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:19:26.401975   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:19:28.940019   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:19:33.942892   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:19:33.943326   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:19:33.986891   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:19:33.987032   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:19:34.007364   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:19:34.007453   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:19:34.022125   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:19:34.022205   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:19:34.034296   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:19:34.034369   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:19:34.045898   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:19:34.045967   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:19:34.063897   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:19:34.063968   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:19:34.073747   14497 logs.go:276] 0 containers: []
	W0819 11:19:34.073757   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:19:34.073810   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:19:34.085134   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:19:34.085151   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:19:34.085157   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:19:34.102914   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:19:34.102924   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:19:34.122062   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:19:34.122072   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:19:34.126672   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:19:34.126682   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:19:34.140822   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:19:34.140830   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:19:34.152445   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:19:34.152455   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:19:34.164164   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:19:34.164174   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:19:34.175365   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:19:34.175373   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:19:34.201294   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:19:34.201305   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:19:34.213512   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:19:34.213522   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:19:34.251396   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:19:34.251411   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:19:34.289117   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:19:34.289129   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:19:34.303252   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:19:34.303262   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:19:34.323067   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:19:34.323079   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:19:34.335445   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:19:34.335456   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:19:34.347155   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:19:34.347166   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:19:34.358797   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:19:34.358808   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:19:36.872634   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:19:41.875323   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:19:41.875694   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:19:41.907730   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:19:41.907858   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:19:41.928483   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:19:41.928563   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:19:41.941984   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:19:41.942057   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:19:41.954239   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:19:41.954310   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:19:41.969323   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:19:41.969366   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:19:41.979989   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:19:41.980055   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:19:41.990282   14497 logs.go:276] 0 containers: []
	W0819 11:19:41.990291   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:19:41.990333   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:19:42.008029   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:19:42.008047   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:19:42.008053   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:19:42.021796   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:19:42.021809   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:19:42.045333   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:19:42.045345   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:19:42.064532   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:19:42.064543   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:19:42.083835   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:19:42.083848   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:19:42.095457   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:19:42.095469   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:19:42.107250   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:19:42.107259   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:19:42.124338   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:19:42.124351   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:19:42.135884   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:19:42.135895   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:19:42.149972   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:19:42.149985   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:19:42.154759   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:19:42.154768   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:19:42.189329   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:19:42.189339   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:19:42.203754   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:19:42.203764   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:19:42.216477   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:19:42.216486   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:19:42.255383   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:19:42.255392   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:19:42.269640   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:19:42.269650   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:19:42.295630   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:19:42.295638   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:19:44.810006   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:19:49.812374   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:19:49.812690   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:19:49.845886   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:19:49.845999   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:19:49.869936   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:19:49.870026   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:19:49.883682   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:19:49.883764   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:19:49.895108   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:19:49.895167   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:19:49.905616   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:19:49.905678   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:19:49.916169   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:19:49.916250   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:19:49.930948   14497 logs.go:276] 0 containers: []
	W0819 11:19:49.930962   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:19:49.931015   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:19:49.941040   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:19:49.941060   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:19:49.941065   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:19:49.953939   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:19:49.953952   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:19:49.965753   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:19:49.965766   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:19:49.990286   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:19:49.990296   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:19:50.026619   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:19:50.026627   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:19:50.039564   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:19:50.039581   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:19:50.053293   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:19:50.053310   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:19:50.067585   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:19:50.067599   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:19:50.085508   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:19:50.085521   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:19:50.090297   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:19:50.090304   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:19:50.101839   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:19:50.101849   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:19:50.118851   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:19:50.118867   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:19:50.130316   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:19:50.130327   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:19:50.141893   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:19:50.141904   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:19:50.179603   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:19:50.179615   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:19:50.193916   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:19:50.193928   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:19:50.208529   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:19:50.208541   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:19:52.724695   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:19:57.726878   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:19:57.727220   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:19:57.762320   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:19:57.762441   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:19:57.783182   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:19:57.783265   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:19:57.800019   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:19:57.800089   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:19:57.811951   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:19:57.812036   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:19:57.823334   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:19:57.823406   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:19:57.842177   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:19:57.842240   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:19:57.853344   14497 logs.go:276] 0 containers: []
	W0819 11:19:57.853355   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:19:57.853407   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:19:57.867705   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:19:57.867721   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:19:57.867725   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:19:57.908494   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:19:57.908506   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:19:57.923321   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:19:57.923333   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:19:57.937821   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:19:57.937832   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:19:57.949103   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:19:57.949112   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:19:57.966299   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:19:57.966309   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:19:57.977451   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:19:57.977463   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:19:57.989296   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:19:57.989307   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:19:58.015091   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:19:58.015100   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:19:58.061026   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:19:58.061038   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:19:58.065307   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:19:58.065316   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:19:58.091500   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:19:58.091511   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:19:58.106012   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:19:58.106024   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:19:58.118380   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:19:58.118394   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:19:58.129986   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:19:58.129997   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:19:58.141691   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:19:58.141703   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:19:58.153616   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:19:58.153628   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:20:00.667802   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:20:05.670205   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:20:05.670400   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:20:05.682346   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:20:05.682414   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:20:05.693199   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:20:05.693267   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:20:05.704042   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:20:05.704111   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:20:05.715284   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:20:05.715355   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:20:05.729267   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:20:05.729337   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:20:05.740184   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:20:05.740248   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:20:05.749892   14497 logs.go:276] 0 containers: []
	W0819 11:20:05.749903   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:20:05.749956   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:20:05.760856   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:20:05.760873   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:20:05.760878   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:20:05.775600   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:20:05.775612   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:20:05.814808   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:20:05.814821   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:20:05.829386   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:20:05.829396   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:20:05.851164   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:20:05.851175   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:20:05.863303   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:20:05.863314   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:20:05.875368   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:20:05.875379   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:20:05.887717   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:20:05.887728   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:20:05.899910   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:20:05.899921   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:20:05.911892   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:20:05.911903   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:20:05.932095   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:20:05.932105   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:20:05.957895   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:20:05.957904   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:20:05.974429   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:20:05.974440   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:20:05.989273   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:20:05.989283   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:20:06.001229   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:20:06.001240   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:20:06.013245   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:20:06.013256   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:20:06.017784   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:20:06.017791   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
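The repeating pattern above is minikube's apiserver health probe: an HTTPS GET against https://10.0.2.15:8443/healthz (logged from api_server.go) that fails with "Client.Timeout exceeded" after roughly five seconds, after which a full log-gathering pass runs. A minimal self-contained Go sketch of one such probe; the function name, the 5s timeout, and the skipped TLS verification are assumptions for illustration, not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver /healthz endpoint.
// A 5s client timeout mirrors the "Client.Timeout exceeded while awaiting
// headers" errors recorded in the log above.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The guest apiserver presents a self-signed certificate, so
		// verification is skipped here (an assumption for this sketch).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded, as seen above
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}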
	I0819 11:20:08.556088   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:20:13.558464   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:20:13.558952   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:20:13.600182   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:20:13.600315   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:20:13.622205   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:20:13.622303   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:20:13.637061   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:20:13.637125   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:20:13.650348   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:20:13.650418   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:20:13.661663   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:20:13.661727   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:20:13.672767   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:20:13.672835   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:20:13.683435   14497 logs.go:276] 0 containers: []
	W0819 11:20:13.683446   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:20:13.683499   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:20:13.694308   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:20:13.694328   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:20:13.694334   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:20:13.711230   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:20:13.711241   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:20:13.724766   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:20:13.724778   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:20:13.744785   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:20:13.744794   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:20:13.756954   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:20:13.756964   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:20:13.768819   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:20:13.768830   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:20:13.784084   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:20:13.784096   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:20:13.798583   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:20:13.798596   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:20:13.822679   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:20:13.822690   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:20:13.848190   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:20:13.848199   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:20:13.860031   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:20:13.860042   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:20:13.898997   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:20:13.899006   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:20:13.935355   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:20:13.935368   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:20:13.948128   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:20:13.948138   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:20:13.967893   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:20:13.967905   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:20:13.972261   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:20:13.972267   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:20:13.986821   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:20:13.986833   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
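Before each gathering pass, the runner enumerates per-component containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, which is why every cycle reports "N containers: [...]" per component and warns when a filter (here "kindnet") matches nothing. A hedged Go sketch of the same enumeration, shelling out to docker as the ssh_runner lines do; the helper name is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name
// matches the k8s_<component> prefix, mirroring the docker ps lines above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}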
	I0819 11:20:16.510368   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:20:21.511110   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:20:21.511217   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:20:21.526692   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:20:21.526766   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:20:21.538947   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:20:21.539023   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:20:21.550538   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:20:21.550610   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:20:21.566358   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:20:21.566432   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:20:21.578655   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:20:21.578732   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:20:21.590911   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:20:21.590980   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:20:21.602120   14497 logs.go:276] 0 containers: []
	W0819 11:20:21.602134   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:20:21.602202   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:20:21.614361   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:20:21.614381   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:20:21.614388   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:20:21.639813   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:20:21.639829   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:20:21.665868   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:20:21.665883   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:20:21.679177   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:20:21.679187   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:20:21.696322   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:20:21.696332   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:20:21.701520   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:20:21.701534   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:20:21.716885   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:20:21.716902   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:20:21.729945   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:20:21.729956   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:20:21.749638   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:20:21.749655   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:20:21.764253   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:20:21.764264   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:20:21.803171   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:20:21.803188   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:20:21.842894   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:20:21.842910   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:20:21.857258   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:20:21.857270   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:20:21.873172   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:20:21.873188   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:20:21.887317   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:20:21.887328   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:20:21.900945   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:20:21.900957   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:20:21.913815   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:20:21.913828   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
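The "container status" step uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl when it is on the PATH and fall back to docker ps -a when the crictl invocation fails. A sketch of the same preference order in Go, under the assumption that either tool may be absent; the function name is hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl (the CRI-level view) and falls back to
// docker, matching the ordering of the shell one-liner in the log.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(out)
}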
	I0819 11:20:24.428446   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:20:29.431115   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:20:29.431362   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:20:29.451847   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:20:29.451954   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:20:29.467406   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:20:29.467495   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:20:29.480337   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:20:29.480415   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:20:29.491272   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:20:29.491345   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:20:29.501784   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:20:29.501845   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:20:29.512053   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:20:29.512124   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:20:29.522219   14497 logs.go:276] 0 containers: []
	W0819 11:20:29.522229   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:20:29.522286   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:20:29.539244   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:20:29.539262   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:20:29.539267   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:20:29.553726   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:20:29.553738   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:20:29.565673   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:20:29.565684   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:20:29.577844   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:20:29.577858   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:20:29.589847   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:20:29.589858   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:20:29.594546   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:20:29.594555   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:20:29.609130   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:20:29.609140   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:20:29.621492   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:20:29.621502   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:20:29.633360   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:20:29.633371   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:20:29.659488   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:20:29.659502   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:20:29.696973   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:20:29.696985   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:20:29.710853   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:20:29.710864   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:20:29.728079   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:20:29.728091   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:20:29.739040   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:20:29.739051   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:20:29.750661   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:20:29.750672   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:20:29.762568   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:20:29.762582   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:20:29.802926   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:20:29.802937   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:20:32.316975   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:20:37.319710   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:20:37.320101   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:20:37.357210   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:20:37.357342   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:20:37.377453   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:20:37.377570   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:20:37.392277   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:20:37.392348   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:20:37.406554   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:20:37.406630   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:20:37.419197   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:20:37.419264   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:20:37.430090   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:20:37.430157   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:20:37.440562   14497 logs.go:276] 0 containers: []
	W0819 11:20:37.440575   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:20:37.440638   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:20:37.451434   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:20:37.451454   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:20:37.451460   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:20:37.456016   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:20:37.456022   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:20:37.470248   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:20:37.470257   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:20:37.483343   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:20:37.483356   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:20:37.500014   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:20:37.500025   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:20:37.536010   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:20:37.536018   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:20:37.551696   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:20:37.551709   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:20:37.574410   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:20:37.574421   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:20:37.597469   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:20:37.597476   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:20:37.608981   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:20:37.608991   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:20:37.644100   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:20:37.644115   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:20:37.662008   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:20:37.662018   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:20:37.674004   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:20:37.674014   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:20:37.688981   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:20:37.688992   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:20:37.704437   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:20:37.704447   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:20:37.716447   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:20:37.716456   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:20:37.736291   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:20:37.736301   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:20:40.249228   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:20:45.251647   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:20:45.251740   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:20:45.262456   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:20:45.262517   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:20:45.273326   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:20:45.273396   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:20:45.283984   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:20:45.284057   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:20:45.294562   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:20:45.294629   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:20:45.305462   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:20:45.305541   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:20:45.317065   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:20:45.317149   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:20:45.328057   14497 logs.go:276] 0 containers: []
	W0819 11:20:45.328069   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:20:45.328123   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:20:45.338631   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:20:45.338650   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:20:45.338655   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:20:45.352819   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:20:45.352833   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:20:45.370218   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:20:45.370229   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:20:45.384998   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:20:45.385008   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:20:45.414381   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:20:45.414394   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:20:45.432914   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:20:45.432926   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:20:45.444204   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:20:45.444217   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:20:45.481980   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:20:45.481988   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:20:45.496101   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:20:45.496112   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:20:45.520689   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:20:45.520697   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:20:45.532048   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:20:45.532059   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:20:45.543198   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:20:45.543212   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:20:45.555385   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:20:45.555394   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:20:45.566708   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:20:45.566719   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:20:45.578222   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:20:45.578234   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:20:45.590304   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:20:45.590314   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:20:45.594539   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:20:45.594549   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
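Every "Gathering logs for X" line pairs a log source with a fixed shell command: container sources use docker logs --tail 400 <id>, while kubelet and Docker come from journalctl with -n 400, both capped at 400 lines. A compact sketch of that dispatch; the switch and helper name are assumptions for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// tailCommand returns the shell command used for a given log source,
// matching the ssh_runner lines above: containers via docker logs,
// systemd units via journalctl, each limited to the last 400 lines.
func tailCommand(source, id string) *exec.Cmd {
	switch source {
	case "kubelet":
		return exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
	case "Docker":
		return exec.Command("/bin/bash", "-c", "sudo journalctl -u docker -u cri-docker -n 400")
	default: // a container ID, e.g. an etcd or kube-apiserver container
		return exec.Command("/bin/bash", "-c",
			fmt.Sprintf("docker logs --tail 400 %s", id))
	}
}

func main() {
	out, err := tailCommand("etcd", "0b6d1c937b20").CombinedOutput()
	if err != nil {
		fmt.Println("gather failed:", err)
	}
	fmt.Print(string(out))
}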
	I0819 11:20:48.132584   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:20:53.134928   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:20:53.135163   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:20:53.163423   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:20:53.163546   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:20:53.204344   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:20:53.204408   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:20:53.224263   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:20:53.224324   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:20:53.234757   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:20:53.234841   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:20:53.245637   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:20:53.245704   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:20:53.255966   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:20:53.256022   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:20:53.265623   14497 logs.go:276] 0 containers: []
	W0819 11:20:53.265633   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:20:53.265680   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:20:53.282300   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:20:53.282319   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:20:53.282325   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:20:53.287010   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:20:53.287017   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:20:53.301236   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:20:53.301251   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:20:53.316421   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:20:53.316434   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:20:53.328113   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:20:53.328125   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:20:53.365080   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:20:53.365090   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:20:53.376890   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:20:53.376904   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:20:53.388632   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:20:53.388646   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:20:53.418255   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:20:53.418265   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:20:53.442085   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:20:53.442093   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:20:53.459535   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:20:53.459546   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:20:53.471265   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:20:53.471277   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:20:53.482726   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:20:53.482740   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:20:53.518154   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:20:53.518164   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:20:53.533674   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:20:53.533689   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:20:53.547596   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:20:53.547610   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:20:53.560029   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:20:53.560042   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:20:56.076689   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:01.079358   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:01.079513   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:01.095574   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:01.095649   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:01.107734   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:01.107809   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:01.118628   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:01.118693   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:01.129213   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:01.129277   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:01.139971   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:01.140031   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:01.150860   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:01.150933   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:01.161269   14497 logs.go:276] 0 containers: []
	W0819 11:21:01.161280   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:01.161336   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:01.172032   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:01.172051   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:01.172056   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:01.184536   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:01.184546   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:01.197133   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:01.197145   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:01.208832   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:01.208842   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:01.248379   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:01.248389   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:01.253000   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:01.253010   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:01.289452   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:01.289465   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:01.303977   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:01.303989   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:01.315772   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:01.315781   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:01.327533   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:01.327544   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:01.346362   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:01.346374   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:01.367770   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:01.367780   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:01.382527   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:01.382536   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:01.394787   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:01.394798   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:01.410027   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:01.410037   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:01.421316   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:01.421327   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:01.433067   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:01.433078   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:03.960042   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:08.962382   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:08.962878   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:09.002578   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:09.002712   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:09.024566   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:09.024681   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:09.039963   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:09.040040   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:09.052905   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:09.052975   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:09.064168   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:09.064239   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:09.074632   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:09.074695   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:09.088675   14497 logs.go:276] 0 containers: []
	W0819 11:21:09.088692   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:09.088753   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:09.099338   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:09.099353   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:09.099359   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:09.113650   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:09.113663   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:09.125843   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:09.125853   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:09.138536   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:09.138545   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:09.151251   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:09.151263   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:09.169582   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:09.169595   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:09.182134   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:09.182150   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:09.186445   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:09.186451   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:09.222634   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:09.222643   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:09.240589   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:09.240601   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:09.253813   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:09.253826   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:09.266263   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:09.266275   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:09.280739   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:09.280747   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:09.293154   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:09.293166   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:09.307271   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:09.307284   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:09.324653   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:09.324666   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:09.347295   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:09.347305   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:11.897947   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:16.898601   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:16.898710   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:16.910836   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:16.910910   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:16.925280   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:16.925359   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:16.936783   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:16.936865   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:16.948492   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:16.948564   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:16.960396   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:16.960474   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:16.971977   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:16.972054   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:16.984014   14497 logs.go:276] 0 containers: []
	W0819 11:21:16.984029   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:16.984087   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:16.995347   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:16.995365   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:16.995371   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:17.035928   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:17.035942   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:17.049991   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:17.050003   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:17.068817   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:17.068832   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:17.082442   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:17.082454   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:17.122908   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:17.122930   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:17.128126   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:17.128135   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:17.143739   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:17.143754   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:17.159757   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:17.159770   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:17.176909   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:17.176923   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:17.189372   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:17.189385   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:17.213177   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:17.213197   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:17.228749   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:17.228761   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:17.242126   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:17.242138   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:17.255115   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:17.255128   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:17.273835   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:17.273847   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:17.295332   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:17.295349   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:19.811405   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:24.813662   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:24.813888   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:24.844769   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:24.844893   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:24.868346   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:24.868457   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:24.881485   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:24.881563   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:24.893393   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:24.893469   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:24.905744   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:24.905807   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:24.916577   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:24.916647   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:24.926636   14497 logs.go:276] 0 containers: []
	W0819 11:21:24.926647   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:24.926704   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:24.937244   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:24.937265   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:24.937270   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:24.949102   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:24.949116   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:24.966570   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:24.966584   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:24.977881   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:24.977895   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:24.989441   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:24.989454   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:24.993825   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:24.993835   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:25.029485   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:25.029499   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:25.043600   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:25.043614   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:25.056475   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:25.056486   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:25.078673   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:25.078681   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:25.092144   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:25.092155   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:25.106017   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:25.106027   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:25.117690   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:25.117699   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:25.129641   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:25.129651   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:25.147566   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:25.147575   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:25.158513   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:25.158525   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:25.194935   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:25.194942   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:27.709011   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:32.709395   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:32.709486   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:32.720247   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:32.720315   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:32.730936   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:32.731000   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:32.742353   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:32.742418   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:32.753443   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:32.753511   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:32.763906   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:32.763970   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:32.774790   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:32.774853   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:32.784573   14497 logs.go:276] 0 containers: []
	W0819 11:21:32.784584   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:32.784641   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:32.795166   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:32.795188   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:32.795196   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:32.829596   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:32.829608   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:32.843706   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:32.843717   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:32.857894   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:32.857906   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:32.869093   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:32.869104   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:32.889574   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:32.889586   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:32.907563   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:32.907574   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:32.922908   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:32.922919   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:32.934653   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:32.934664   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:32.939211   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:32.939219   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:32.953468   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:32.953479   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:32.964910   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:32.964921   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:32.976617   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:32.976630   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:32.987739   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:32.987749   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:33.023566   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:33.023574   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:33.038105   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:33.038114   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:33.056007   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:33.056020   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:35.580350   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:40.582627   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:40.582745   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:40.594750   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:40.594820   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:40.605589   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:40.605652   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:40.616658   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:40.616724   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:40.627098   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:40.627168   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:40.637578   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:40.637637   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:40.650071   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:40.650132   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:40.660939   14497 logs.go:276] 0 containers: []
	W0819 11:21:40.660951   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:40.661006   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:40.672093   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:40.672112   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:40.672118   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:40.676637   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:40.676644   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:40.691611   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:40.691621   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:40.704812   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:40.704822   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:40.716855   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:40.716871   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:40.729143   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:40.729157   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:40.753196   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:40.753211   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:40.791628   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:40.791650   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:40.804160   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:40.804172   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:40.817126   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:40.817142   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:40.834673   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:40.834682   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:40.846781   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:40.846794   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:40.859629   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:40.859641   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:40.894750   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:40.894760   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:40.906415   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:40.906425   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:40.920806   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:40.920816   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:40.931910   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:40.931920   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:43.453012   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:48.455304   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:48.455588   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:48.484765   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:48.484885   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:48.501582   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:48.501679   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:48.515247   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:48.515326   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:48.526650   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:48.526707   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:48.537984   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:48.538064   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:48.548724   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:48.548792   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:48.558691   14497 logs.go:276] 0 containers: []
	W0819 11:21:48.558704   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:48.558762   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:48.569560   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:48.569577   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:48.569585   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:48.583788   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:48.583799   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:48.601291   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:48.601300   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:48.613004   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:48.613014   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:48.635842   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:48.635851   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:48.639833   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:48.639839   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:48.654227   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:48.654237   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:48.693487   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:48.693497   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:48.718238   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:48.718248   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:48.735496   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:48.735508   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:48.748680   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:48.748690   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:48.768942   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:48.768953   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:48.780238   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:48.780248   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:48.792040   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:48.792051   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:48.803794   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:48.803804   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:48.815863   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:48.815873   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:48.850081   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:48.850092   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:51.364334   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:56.366640   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:56.366682   14497 kubeadm.go:597] duration metric: took 4m4.11502425s to restartPrimaryControlPlane
	W0819 11:21:56.366715   14497 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 11:21:56.366734   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 11:21:57.367902   14497 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.001162625s)
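After 4m4s of failed healthz probes, restartPrimaryControlPlane gives up and minikube falls back to a full cluster reset. The fallback command, condensed from the two log lines above (it uses the kubeadm binary minikube stages on the node, not a system-wide install):

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force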
	I0819 11:21:57.367978   14497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:21:57.372990   14497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:21:57.375738   14497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:21:57.378575   14497 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:21:57.378581   14497 kubeadm.go:157] found existing configuration files:
	
	I0819 11:21:57.378601   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/admin.conf
	I0819 11:21:57.381663   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:21:57.381689   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:21:57.384646   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/kubelet.conf
	I0819 11:21:57.387053   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:21:57.387078   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:21:57.390142   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/controller-manager.conf
	I0819 11:21:57.392983   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:21:57.393008   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:21:57.395446   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/scheduler.conf
	I0819 11:21:57.398152   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:21:57.398174   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
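The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise (here all four files are simply absent after the reset, so every grep exits with status 2). A hedged sketch of the same check as a loop:

    endpoint="https://control-plane.minikube.internal:52176"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it points at the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done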
	I0819 11:21:57.401137   14497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:21:57.418198   14497 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 11:21:57.418226   14497 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:21:57.466086   14497 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:21:57.466139   14497 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:21:57.466195   14497 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 11:21:57.516197   14497 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:21:57.519457   14497 out.go:235]   - Generating certificates and keys ...
	I0819 11:21:57.519490   14497 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:21:57.519524   14497 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:21:57.519566   14497 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 11:21:57.519595   14497 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 11:21:57.519630   14497 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 11:21:57.519662   14497 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 11:21:57.519696   14497 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 11:21:57.519733   14497 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 11:21:57.519773   14497 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 11:21:57.519813   14497 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 11:21:57.519832   14497 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 11:21:57.519858   14497 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:21:57.570399   14497 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:21:57.744048   14497 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:21:57.790797   14497 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:21:57.933781   14497 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:21:57.965589   14497 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:21:57.965945   14497 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:21:57.965966   14497 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:21:58.059744   14497 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:21:58.063391   14497 out.go:235]   - Booting up control plane ...
	I0819 11:21:58.063520   14497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:21:58.063739   14497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:21:58.063856   14497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:21:58.063972   14497 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:21:58.064134   14497 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 11:22:02.561465   14497 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502627 seconds
	I0819 11:22:02.561625   14497 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:22:02.566803   14497 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:22:03.075989   14497 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:22:03.076094   14497 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-015000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:22:03.581352   14497 kubeadm.go:310] [bootstrap-token] Using token: fmn8m1.0hyitk70kfiab5le
	I0819 11:22:03.584008   14497 out.go:235]   - Configuring RBAC rules ...
	I0819 11:22:03.584074   14497 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:22:03.584144   14497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:22:03.586506   14497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:22:03.590586   14497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:22:03.591462   14497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:22:03.592967   14497 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:22:03.595917   14497 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:22:03.773901   14497 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:22:03.986893   14497 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:22:03.987250   14497 kubeadm.go:310] 
	I0819 11:22:03.987281   14497 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:22:03.987284   14497 kubeadm.go:310] 
	I0819 11:22:03.987321   14497 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:22:03.987323   14497 kubeadm.go:310] 
	I0819 11:22:03.987339   14497 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:22:03.987373   14497 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:22:03.987398   14497 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:22:03.987401   14497 kubeadm.go:310] 
	I0819 11:22:03.987494   14497 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:22:03.987499   14497 kubeadm.go:310] 
	I0819 11:22:03.987521   14497 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:22:03.987524   14497 kubeadm.go:310] 
	I0819 11:22:03.987550   14497 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:22:03.987594   14497 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:22:03.987667   14497 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:22:03.987671   14497 kubeadm.go:310] 
	I0819 11:22:03.987727   14497 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:22:03.987769   14497 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:22:03.987775   14497 kubeadm.go:310] 
	I0819 11:22:03.987824   14497 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fmn8m1.0hyitk70kfiab5le \
	I0819 11:22:03.987874   14497 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3f715a0124d50cfae4e4dfc474638f45f1ddd0476a0318801e6849c5425b2951 \
	I0819 11:22:03.987884   14497 kubeadm.go:310] 	--control-plane 
	I0819 11:22:03.987886   14497 kubeadm.go:310] 
	I0819 11:22:03.987925   14497 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:22:03.987927   14497 kubeadm.go:310] 
	I0819 11:22:03.987966   14497 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fmn8m1.0hyitk70kfiab5le \
	I0819 11:22:03.988018   14497 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3f715a0124d50cfae4e4dfc474638f45f1ddd0476a0318801e6849c5425b2951 
	I0819 11:22:03.988077   14497 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 11:22:03.988085   14497 cni.go:84] Creating CNI manager for ""
	I0819 11:22:03.988093   14497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:22:03.991939   14497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 11:22:03.999911   14497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 11:22:04.002822   14497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
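The scp step above writes a 496-byte bridge conflist to /etc/cni/net.d; the log does not show the file's contents, so the following is only a hypothetical example of the kind of bridge/portmap configuration such a file typically contains, not the actual payload:

    # Hypothetical bridge CNI config; contents illustrative only.
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF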
	I0819 11:22:04.007516   14497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:22:04.007557   14497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:22:04.007577   14497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-015000 minikube.k8s.io/updated_at=2024_08_19T11_22_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=running-upgrade-015000 minikube.k8s.io/primary=true
	I0819 11:22:04.048946   14497 ops.go:34] apiserver oom_adj: -16
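An oom_adj of -16 (read above from /proc/<pid>/oom_adj) makes the kernel's OOM killer strongly prefer other processes over the apiserver. The check can be reproduced by hand with the same command the log runs:

    cat "/proc/$(pgrep kube-apiserver)/oom_adj"   # prints -16 on this node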
	I0819 11:22:04.048946   14497 kubeadm.go:1113] duration metric: took 41.420417ms to wait for elevateKubeSystemPrivileges
	I0819 11:22:04.049042   14497 kubeadm.go:394] duration metric: took 4m11.811526917s to StartCluster
	I0819 11:22:04.049055   14497 settings.go:142] acquiring lock: {Name:mk15c923e9a2cce6164c6c5cc70f47fd16c4c208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:22:04.049209   14497 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:22:04.049576   14497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/kubeconfig: {Name:mkf06e67426049c2259f6e46b5143872117d8aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:22:04.049761   14497 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:22:04.049772   14497 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 11:22:04.049808   14497 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-015000"
	I0819 11:22:04.049820   14497 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-015000"
	W0819 11:22:04.049824   14497 addons.go:243] addon storage-provisioner should already be in state true
	I0819 11:22:04.049829   14497 config.go:182] Loaded profile config "running-upgrade-015000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:22:04.049836   14497 host.go:66] Checking if "running-upgrade-015000" exists ...
	I0819 11:22:04.049884   14497 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-015000"
	I0819 11:22:04.049897   14497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-015000"
	I0819 11:22:04.050700   14497 kapi.go:59] client config for running-upgrade-015000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/client.key", CAFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b1bd10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:22:04.050832   14497 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-015000"
	W0819 11:22:04.050838   14497 addons.go:243] addon default-storageclass should already be in state true
	I0819 11:22:04.050845   14497 host.go:66] Checking if "running-upgrade-015000" exists ...
	I0819 11:22:04.053885   14497 out.go:177] * Verifying Kubernetes components...
	I0819 11:22:04.054216   14497 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:22:04.058242   14497 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:22:04.058249   14497 sshutil.go:53] new ssh client: &{IP:localhost Port:52144 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/running-upgrade-015000/id_rsa Username:docker}
	I0819 11:22:04.060852   14497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:22:04.068045   14497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:22:04.068109   14497 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:22:04.068117   14497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:22:04.068121   14497 sshutil.go:53] new ssh client: &{IP:localhost Port:52144 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/running-upgrade-015000/id_rsa Username:docker}
	I0819 11:22:04.151923   14497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:22:04.157225   14497 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:22:04.157270   14497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:22:04.161307   14497 api_server.go:72] duration metric: took 111.5325ms to wait for apiserver process to appear ...
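The 111ms "wait for apiserver process" above is a pgrep poll: minikube confirms a kube-apiserver process started by this profile exists before it begins probing healthz. A hedged standalone equivalent using the same pattern the log shows:

    # poll until a kube-apiserver process whose command line mentions minikube appears
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 1; done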
	I0819 11:22:04.161314   14497 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:22:04.161321   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:04.175020   14497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:22:04.253321   14497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:22:04.526774   14497 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 11:22:04.526785   14497 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 11:22:09.163391   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:09.163436   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:14.164178   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:14.164199   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:19.164585   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:19.164644   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:24.165252   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:24.165275   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:29.165922   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:29.165948   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:34.166819   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:34.166858   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 11:22:34.529157   14497 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 11:22:34.533615   14497 out.go:177] * Enabled addons: storage-provisioner
	I0819 11:22:34.545488   14497 addons.go:510] duration metric: took 30.495866166s for enable addons: enabled=[storage-provisioner]
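Both addons are applied with the cluster's own kubectl against the on-node kubeconfig. storage-provisioner is reported as enabled once its manifest is applied, while default-storageclass must also list StorageClasses through the (unreachable) apiserver and therefore fails with the i/o timeout above. Once the apiserver is reachable, the equivalent of what default-storageclass attempts is roughly the following (a hedged sketch, assuming the "standard" StorageClass name minikube ships):

    kubectl --kubeconfig /var/lib/minikube/kubeconfig patch storageclass standard \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'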
	I0819 11:22:39.168052   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:39.168097   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:44.169613   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:44.169662   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:49.171557   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:49.171598   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:54.173858   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:54.173893   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:59.176150   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:59.176198   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:04.178502   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:04.178651   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:04.190171   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:04.190246   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:04.228321   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:04.228398   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:04.239392   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:04.239467   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:04.249657   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:04.249720   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:04.259739   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:04.259807   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:04.273429   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:04.273503   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:04.283620   14497 logs.go:276] 0 containers: []
	W0819 11:23:04.283631   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:04.283690   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:04.296910   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:04.296928   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:04.296934   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:04.308541   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:04.308552   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:04.322760   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:04.322772   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:04.334738   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:04.334748   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:04.352516   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:04.352526   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:04.387436   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:04.387448   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:04.401850   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:04.401863   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:04.415183   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:04.415197   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:04.426591   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:04.426603   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:04.437702   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:04.437712   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:04.474554   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:04.474563   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:04.479732   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:04.479740   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:04.504092   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:04.504101   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:07.017767   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:12.020145   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:12.020321   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:12.041306   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:12.041403   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:12.056381   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:12.056460   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:12.069190   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:12.069259   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:12.080050   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:12.080118   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:12.090768   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:12.090832   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:12.101318   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:12.101386   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:12.111169   14497 logs.go:276] 0 containers: []
	W0819 11:23:12.111180   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:12.111236   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:12.121961   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:12.121976   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:12.121981   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:12.133761   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:12.133774   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:12.146172   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:12.146183   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:12.158765   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:12.158776   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:12.163535   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:12.163542   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:12.177971   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:12.177981   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:12.192591   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:12.192601   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:12.207811   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:12.207825   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:12.222599   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:12.222610   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:12.240023   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:12.240035   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:12.252940   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:12.252950   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:12.279926   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:12.279936   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:12.315820   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:12.315828   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:14.852421   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:19.855039   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:19.855235   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:19.875739   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:19.875819   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:19.888875   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:19.888949   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:19.900513   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:19.900578   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:19.911607   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:19.911681   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:19.922031   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:19.922095   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:19.932572   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:19.932642   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:19.950801   14497 logs.go:276] 0 containers: []
	W0819 11:23:19.950812   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:19.950873   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:19.960705   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:19.960723   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:19.960729   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:19.972396   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:19.972407   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:19.995396   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:19.995406   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:19.999916   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:19.999923   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:20.035619   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:20.035630   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:20.049786   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:20.049796   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:20.061275   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:20.061289   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:20.075570   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:20.075585   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:20.092958   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:20.092970   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:20.105770   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:20.105784   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:20.145379   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:20.145392   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:20.159574   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:20.159589   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:20.170948   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:20.170960   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:22.687553   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:27.689444   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:27.689599   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:27.703762   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:27.703841   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:27.716961   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:27.717028   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:27.728381   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:27.728446   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:27.738999   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:27.739065   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:27.749947   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:27.750014   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:27.760313   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:27.760379   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:27.769962   14497 logs.go:276] 0 containers: []
	W0819 11:23:27.769973   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:27.770030   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:27.780005   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:27.780020   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:27.780025   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:27.795923   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:27.795933   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:27.808043   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:27.808053   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:27.825964   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:27.825978   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:27.844566   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:27.844576   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:27.857908   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:27.857919   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:27.900979   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:27.900994   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:27.915102   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:27.915112   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:27.926378   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:27.926391   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:27.938099   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:27.938113   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:27.961545   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:27.961556   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:27.972734   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:27.972745   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:28.008489   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:28.008499   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:30.514950   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:35.517255   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:35.517434   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:35.534507   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:35.534604   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:35.548381   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:35.548454   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:35.560285   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:35.560357   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:35.570696   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:35.570760   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:35.583144   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:35.583211   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:35.595077   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:35.595141   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:35.605098   14497 logs.go:276] 0 containers: []
	W0819 11:23:35.605110   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:35.605168   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:35.615398   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:35.615412   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:35.615418   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:35.658981   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:35.658993   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:35.673735   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:35.673745   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:35.689475   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:35.689486   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:35.701328   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:35.701339   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:35.719914   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:35.719925   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:35.731596   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:35.731607   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:35.749524   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:35.749533   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:35.761460   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:35.761477   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:35.799444   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:35.799457   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:35.804277   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:35.804287   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:35.816192   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:35.816205   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:35.829409   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:35.829420   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:38.355588   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:43.357964   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:43.358061   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:43.371745   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:43.371819   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:43.383242   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:43.383313   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:43.394210   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:43.394284   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:43.404922   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:43.404990   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:43.417228   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:43.417300   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:43.430177   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:43.430237   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:43.440306   14497 logs.go:276] 0 containers: []
	W0819 11:23:43.440316   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:43.440371   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:43.450861   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:43.450883   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:43.450890   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:43.476065   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:43.476076   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:43.487207   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:43.487224   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:43.524968   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:43.524979   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:43.539352   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:43.539362   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:43.552611   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:43.552622   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:43.564587   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:43.564598   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:43.576564   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:43.576576   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:43.595000   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:43.595011   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:43.615224   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:43.615237   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:43.652907   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:43.652918   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:43.657754   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:43.657761   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:43.670123   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:43.670134   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:46.193921   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:51.196185   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:51.196312   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:51.208629   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:51.208702   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:51.219282   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:51.219355   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:51.230063   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:51.230126   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:51.246680   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:51.246751   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:51.257603   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:51.257668   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:51.268222   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:51.268280   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:51.278129   14497 logs.go:276] 0 containers: []
	W0819 11:23:51.278141   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:51.278197   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:51.288803   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:51.288816   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:51.288821   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:51.306537   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:51.306548   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:51.330314   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:51.330322   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:51.342256   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:51.342267   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:51.347035   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:51.347041   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:51.381915   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:51.381925   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:51.395738   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:51.395748   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:51.407572   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:51.407582   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:51.421288   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:51.421298   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:51.434224   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:51.434236   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:51.472697   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:51.472705   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:51.488691   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:51.488701   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:51.500632   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:51.500642   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:54.013813   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:59.016074   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:59.016179   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:59.027090   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:59.027154   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:59.041775   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:59.041839   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:59.057012   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:59.057080   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:59.067370   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:59.067434   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:59.077774   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:59.077838   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:59.090281   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:59.090345   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:59.100217   14497 logs.go:276] 0 containers: []
	W0819 11:23:59.100227   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:59.100275   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:59.110492   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:59.110505   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:59.110511   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:59.122818   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:59.122832   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:59.160133   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:59.160143   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:59.178108   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:59.178122   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:59.192862   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:59.192872   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:59.207003   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:59.207015   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:59.218583   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:59.218597   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:59.236285   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:59.236299   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:59.259365   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:59.259374   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:59.264358   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:59.264365   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:59.301870   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:59.301880   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:59.316484   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:59.316499   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:59.327935   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:59.327949   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:01.847552   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:06.849755   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:06.849855   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:06.861562   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:06.861634   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:06.872446   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:06.872521   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:06.883408   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:06.883493   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:06.899372   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:06.899441   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:06.910324   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:06.910389   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:06.926720   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:06.926793   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:06.936636   14497 logs.go:276] 0 containers: []
	W0819 11:24:06.936648   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:06.936707   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:06.947325   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:06.947342   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:06.947347   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:06.952260   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:06.952267   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:06.964160   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:06.964172   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:06.975637   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:06.975649   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:06.986962   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:06.986979   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:07.001697   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:07.001707   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:07.017074   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:07.017087   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:07.029859   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:07.029872   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:07.046805   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:07.046815   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:07.070621   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:07.070632   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:07.082081   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:07.082091   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:07.118074   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:07.118084   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:07.132512   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:07.132525   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:07.146319   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:07.146330   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:07.182816   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:07.182830   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:09.696838   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:14.699062   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:14.699155   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:14.710497   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:14.710566   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:14.721044   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:14.721112   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:14.731992   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:14.732059   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:14.742808   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:14.742877   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:14.753145   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:14.753203   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:14.763445   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:14.763507   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:14.773631   14497 logs.go:276] 0 containers: []
	W0819 11:24:14.773644   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:14.773702   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:14.785763   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:14.785778   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:14.785783   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:14.797377   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:14.797389   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:14.811843   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:14.811853   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:14.847816   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:14.847824   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:14.882593   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:14.882604   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:14.894626   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:14.894637   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:14.906344   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:14.906354   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:14.935484   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:14.935499   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:14.950180   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:14.950191   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:14.968143   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:14.968156   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:14.984206   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:14.984217   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:14.988833   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:14.988840   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:15.002932   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:15.002943   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:15.014813   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:15.014826   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:15.026927   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:15.026938   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:17.541225   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:22.543591   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:22.543678   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:22.555014   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:22.555084   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:22.567038   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:22.567109   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:22.578390   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:22.578466   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:22.588860   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:22.588927   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:22.599048   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:22.599114   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:22.609935   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:22.610002   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:22.620278   14497 logs.go:276] 0 containers: []
	W0819 11:24:22.620290   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:22.620345   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:22.630762   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:22.630777   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:22.630783   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:22.642352   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:22.642363   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:22.654443   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:22.654454   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:22.672411   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:22.672421   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:22.685118   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:22.685128   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:22.689567   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:22.689576   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:22.709155   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:22.709167   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:22.724350   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:22.724361   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:22.736023   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:22.736034   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:22.774951   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:22.774961   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:22.786650   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:22.786660   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:22.798579   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:22.798589   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:22.822520   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:22.822529   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:22.860249   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:22.860258   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:22.874973   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:22.874986   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:25.392002   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:30.394286   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:30.394399   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:30.406020   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:30.406099   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:30.417248   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:30.417320   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:30.429562   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:30.429636   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:30.440828   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:30.440901   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:30.452104   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:30.452178   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:30.463549   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:30.463615   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:30.479134   14497 logs.go:276] 0 containers: []
	W0819 11:24:30.479145   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:30.479202   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:30.490388   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:30.490403   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:30.490408   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:30.501834   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:30.501843   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:30.513493   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:30.513504   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:30.517974   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:30.517980   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:30.552178   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:30.552188   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:30.566198   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:30.566208   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:30.581962   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:30.581974   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:30.595997   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:30.596006   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:30.607436   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:30.607447   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:30.623024   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:30.623041   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:30.635295   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:30.635309   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:30.672981   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:30.672992   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:30.685435   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:30.685447   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:30.703950   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:30.703962   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:30.715942   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:30.715953   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:33.243139   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:38.245354   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:38.245451   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:38.257171   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:38.257246   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:38.269006   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:38.269078   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:38.280120   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:38.280191   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:38.291234   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:38.291307   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:38.303342   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:38.303412   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:38.314391   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:38.314463   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:38.326208   14497 logs.go:276] 0 containers: []
	W0819 11:24:38.326220   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:38.326282   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:38.337648   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:38.337666   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:38.337672   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:38.342557   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:38.342569   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:38.381195   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:38.381208   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:38.393758   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:38.393771   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:38.406783   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:38.406793   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:38.433175   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:38.433185   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:38.444787   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:38.444798   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:38.458950   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:38.458960   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:38.471016   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:38.471025   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:38.510088   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:38.510096   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:38.531579   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:38.531593   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:38.544245   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:38.544256   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:38.556013   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:38.556027   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:38.574601   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:38.574611   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:38.586127   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:38.586136   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:41.111404   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:46.112319   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:46.112397   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:46.123861   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:46.123937   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:46.141455   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:46.141528   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:46.152926   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:46.153003   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:46.164426   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:46.164499   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:46.177395   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:46.177462   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:46.189470   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:46.189554   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:46.201366   14497 logs.go:276] 0 containers: []
	W0819 11:24:46.201379   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:46.201445   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:46.218001   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:46.218020   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:46.218026   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:46.244108   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:46.244121   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:46.256951   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:46.256963   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:46.277503   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:46.277511   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:46.289976   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:46.289987   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:46.304958   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:46.304969   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:46.317704   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:46.317715   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:46.332497   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:46.332509   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:46.345624   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:46.345635   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:46.357608   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:46.357621   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:46.375162   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:46.375172   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:46.380098   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:46.380105   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:46.415064   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:46.415075   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:46.429698   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:46.429708   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:46.441651   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:46.441665   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:48.979546   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:53.981568   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:53.981655   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:53.993041   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:53.993112   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:54.004648   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:54.004735   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:54.018592   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:54.018667   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:54.029783   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:54.029850   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:54.049128   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:54.049195   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:54.065094   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:54.065166   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:54.076205   14497 logs.go:276] 0 containers: []
	W0819 11:24:54.076216   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:54.076277   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:54.097554   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:54.097571   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:54.097577   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:54.140306   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:54.140319   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:54.180502   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:54.180515   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:54.192537   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:54.192545   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:54.210178   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:54.210193   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:54.232096   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:54.232108   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:54.245312   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:54.245328   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:54.258578   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:54.258596   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:54.278133   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:54.278141   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:54.290649   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:54.290661   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:54.296023   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:54.296031   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:54.308342   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:54.308352   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:54.321479   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:54.321489   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:54.345790   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:54.345796   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:54.358300   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:54.358316   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:56.874289   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:01.876578   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:01.876643   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:01.894868   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:01.894943   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:01.906584   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:01.906653   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:01.918191   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:01.918264   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:01.930273   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:01.930338   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:01.941419   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:01.941484   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:01.952636   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:01.952705   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:01.964530   14497 logs.go:276] 0 containers: []
	W0819 11:25:01.964540   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:01.964600   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:01.980245   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:01.980262   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:01.980267   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:01.996061   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:01.996072   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:02.023222   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:02.023237   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:02.036967   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:02.036984   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:02.053018   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:02.053032   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:02.067155   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:02.067167   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:02.115784   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:02.115795   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:02.129564   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:02.129576   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:02.142865   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:02.142879   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:02.155691   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:02.155705   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:02.175122   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:02.175140   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:02.214480   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:02.214498   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:02.220230   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:02.220242   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:02.269176   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:02.269187   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:02.281656   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:02.281671   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:04.793525   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:09.795751   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:09.795985   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:09.818043   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:09.818135   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:09.833665   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:09.833747   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:09.847914   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:09.847987   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:09.861130   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:09.861199   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:09.883193   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:09.883246   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:09.895232   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:09.895299   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:09.906523   14497 logs.go:276] 0 containers: []
	W0819 11:25:09.906534   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:09.906593   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:09.917565   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:09.917581   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:09.917587   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:09.922358   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:09.922368   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:09.934745   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:09.934761   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:09.953178   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:09.953190   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:09.968504   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:09.968516   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:09.981193   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:09.981205   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:09.993485   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:09.993497   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:10.007243   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:10.007255   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:10.033213   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:10.033228   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:10.050073   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:10.050088   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:10.088111   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:10.088143   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:10.101725   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:10.101737   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:10.118249   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:10.118261   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:10.162783   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:10.162796   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:10.177852   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:10.177860   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:12.693067   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:17.695329   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:17.695597   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:17.720397   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:17.720502   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:17.736794   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:17.736874   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:17.750210   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:17.750288   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:17.761828   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:17.761880   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:17.773550   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:17.773612   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:17.785063   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:17.785133   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:17.796835   14497 logs.go:276] 0 containers: []
	W0819 11:25:17.796845   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:17.796895   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:17.808139   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:17.808157   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:17.808163   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:17.845604   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:17.845622   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:17.869820   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:17.869832   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:17.884662   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:17.884671   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:17.905805   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:17.905816   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:17.917730   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:17.917740   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:17.943936   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:17.943951   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:17.983136   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:17.983146   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:18.005386   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:18.005397   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:18.019004   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:18.019015   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:18.023973   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:18.023979   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:18.037161   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:18.037174   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:18.050334   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:18.050346   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:18.067172   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:18.067183   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:18.079718   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:18.079729   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:20.596839   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:25.599531   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:25.599797   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:25.627241   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:25.627347   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:25.645567   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:25.645648   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:25.660299   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:25.660373   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:25.672432   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:25.672521   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:25.684087   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:25.684155   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:25.696049   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:25.696116   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:25.714060   14497 logs.go:276] 0 containers: []
	W0819 11:25:25.714073   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:25.714133   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:25.725758   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:25.725778   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:25.725783   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:25.764882   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:25.764893   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:25.782451   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:25.782461   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:25.787323   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:25.787333   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:25.802765   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:25.802777   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:25.819267   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:25.819279   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:25.832426   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:25.832439   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:25.844831   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:25.844844   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:25.861497   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:25.861510   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:25.874246   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:25.874258   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:25.900144   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:25.900157   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:25.940987   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:25.941004   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:25.954273   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:25.954286   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:25.967883   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:25.967896   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:25.983305   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:25.983317   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:28.512023   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:33.514439   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:33.514559   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:33.527203   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:33.527288   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:33.539184   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:33.539269   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:33.552302   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:33.552395   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:33.563005   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:33.563076   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:33.573686   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:33.573774   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:33.585303   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:33.585383   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:33.597451   14497 logs.go:276] 0 containers: []
	W0819 11:25:33.597464   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:33.597546   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:33.608527   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:33.608551   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:33.608558   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:33.623114   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:33.623123   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:33.634897   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:33.634910   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:33.647118   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:33.647129   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:33.670667   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:33.670677   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:33.683717   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:33.683726   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:33.703715   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:33.703730   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:33.716494   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:33.716507   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:33.735636   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:33.735651   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:33.741107   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:33.741121   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:33.780293   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:33.780305   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:33.795783   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:33.795795   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:33.807991   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:33.808004   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:33.847496   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:33.847510   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:33.861575   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:33.861589   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:36.376532   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:41.378691   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:41.378811   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:41.395853   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:41.395950   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:41.406785   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:41.406851   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:41.417451   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:41.417520   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:41.428319   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:41.428392   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:41.441480   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:41.441547   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:41.452097   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:41.452166   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:41.462457   14497 logs.go:276] 0 containers: []
	W0819 11:25:41.462469   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:41.462523   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:41.473335   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:41.473352   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:41.473357   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:41.511338   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:41.511348   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:41.547917   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:41.547928   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:41.562159   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:41.562176   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:41.576891   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:41.576902   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:41.589284   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:41.589296   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:41.600733   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:41.600745   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:41.612650   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:41.612660   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:41.624387   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:41.624398   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:41.640784   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:41.640795   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:41.652980   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:41.652991   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:41.667363   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:41.667373   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:41.684667   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:41.684676   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:41.689382   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:41.689392   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:41.701293   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:41.701304   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:44.226492   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:49.228776   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:49.228943   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:49.239791   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:49.239869   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:49.250911   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:49.250981   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:49.261507   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:49.261578   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:49.272431   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:49.272496   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:49.282998   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:49.283059   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:49.293803   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:49.293871   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:49.305482   14497 logs.go:276] 0 containers: []
	W0819 11:25:49.305493   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:49.305556   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:49.316451   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:49.316468   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:49.316475   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:49.321175   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:49.321181   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:49.335368   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:49.335382   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:49.347358   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:49.347372   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:49.359080   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:49.359093   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:49.376547   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:49.376558   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:49.388039   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:49.388049   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:49.400357   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:49.400369   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:49.414916   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:49.414928   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:49.428956   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:49.428968   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:49.440311   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:49.440325   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:49.463468   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:49.463476   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:49.475409   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:49.475422   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:49.512417   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:49.512426   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:49.547911   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:49.547924   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:52.062550   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:57.064812   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:57.064975   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:57.080590   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:57.080667   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:57.093334   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:57.093401   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:57.104123   14497 logs.go:276] 4 containers: [76f4f96e3d14 33316aef9534 b018f83efc45 31df3e5d6111]
	I0819 11:25:57.104196   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:57.117523   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:57.117591   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:57.128449   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:57.128512   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:57.138842   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:57.138919   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:57.150677   14497 logs.go:276] 0 containers: []
	W0819 11:25:57.150688   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:57.150743   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:57.161554   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:57.161571   14497 logs.go:123] Gathering logs for coredns [76f4f96e3d14] ...
	I0819 11:25:57.161577   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f4f96e3d14"
	I0819 11:25:57.173131   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:57.173143   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:57.185279   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:57.185291   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:57.200468   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:57.200494   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:57.213607   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:57.213617   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:57.252672   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:57.252688   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:57.292289   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:57.292304   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:57.306526   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:57.306541   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:57.323892   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:57.323902   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:57.328345   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:57.328353   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:57.342901   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:57.342912   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:57.357281   14497 logs.go:123] Gathering logs for coredns [33316aef9534] ...
	I0819 11:25:57.357294   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33316aef9534"
	I0819 11:25:57.369036   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:57.369050   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:57.381144   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:57.381156   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:57.393142   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:57.393154   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:59.919589   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:04.921829   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:04.926374   14497 out.go:201] 
	W0819 11:26:04.930419   14497 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 11:26:04.930429   14497 out.go:270] * 
	W0819 11:26:04.931100   14497 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:26:04.942321   14497 out.go:201] 

** /stderr **
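The stderr capture above shows the failure shape: for the whole 6-minute node-start budget, minikube repeats one cycle of probing https://10.0.2.15:8443/healthz, timing each probe out after the 5-second client deadline, and gathering component logs in between, before exiting with GUEST_START. The Go sketch below is a minimal approximation of that probe loop for reproducing the check by hand. It is built only from what the log shows (the endpoint, the 5s timeout, the 6m budget) and is not minikube's actual api_server.go code; the skip-verify TLS setting is an assumption, made because the apiserver serves a cluster-local (minikubeCA-signed) certificate that the probing host does not trust.

// healthz_probe.go (hypothetical file name) - minimal approximation of the
// probe loop visible in the log above; not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		// Matches the "Client.Timeout exceeded while awaiting headers" errors above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification, since the apiserver presents a
			// cluster-local certificate rather than one from a system root.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("probe failed:", err)
		} else {
			fmt.Println("healthz:", resp.Status)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return // healthy: the state this test run never reached
			}
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
	os.Exit(1)
}

Run where the guest address is reachable (for the qemu2 driver that generally means from inside the VM, e.g. via minikube ssh), it prints one line per attempt and exits non-zero if, as in this run, /healthz never returns 200.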
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-015000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-19 11:26:05.038552 -0700 PDT m=+1234.299255043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-015000 -n running-upgrade-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-015000 -n running-upgrade-015000: exit status 2 (15.670577667s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-015000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-411000          | force-systemd-flag-411000 | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-809000              | force-systemd-env-809000  | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-809000           | force-systemd-env-809000  | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT | 19 Aug 24 11:16 PDT |
	| start   | -p docker-flags-391000                | docker-flags-391000       | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-411000             | force-systemd-flag-411000 | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-411000          | force-systemd-flag-411000 | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT | 19 Aug 24 11:16 PDT |
	| start   | -p cert-expiration-924000             | cert-expiration-924000    | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-391000 ssh               | docker-flags-391000       | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-391000 ssh               | docker-flags-391000       | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-391000                | docker-flags-391000       | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT | 19 Aug 24 11:16 PDT |
	| start   | -p cert-options-225000                | cert-options-225000       | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-225000 ssh               | cert-options-225000       | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-225000 -- sudo        | cert-options-225000       | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-225000                | cert-options-225000       | jenkins | v1.33.1 | 19 Aug 24 11:16 PDT | 19 Aug 24 11:16 PDT |
	| start   | -p running-upgrade-015000             | minikube                  | jenkins | v1.26.0 | 19 Aug 24 11:16 PDT | 19 Aug 24 11:17 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-015000             | running-upgrade-015000    | jenkins | v1.33.1 | 19 Aug 24 11:17 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-924000             | cert-expiration-924000    | jenkins | v1.33.1 | 19 Aug 24 11:19 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-924000             | cert-expiration-924000    | jenkins | v1.33.1 | 19 Aug 24 11:19 PDT | 19 Aug 24 11:19 PDT |
	| start   | -p kubernetes-upgrade-611000          | kubernetes-upgrade-611000 | jenkins | v1.33.1 | 19 Aug 24 11:19 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-611000          | kubernetes-upgrade-611000 | jenkins | v1.33.1 | 19 Aug 24 11:19 PDT | 19 Aug 24 11:19 PDT |
	| start   | -p kubernetes-upgrade-611000          | kubernetes-upgrade-611000 | jenkins | v1.33.1 | 19 Aug 24 11:19 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-611000          | kubernetes-upgrade-611000 | jenkins | v1.33.1 | 19 Aug 24 11:19 PDT | 19 Aug 24 11:19 PDT |
	| start   | -p stopped-upgrade-163000             | minikube                  | jenkins | v1.26.0 | 19 Aug 24 11:19 PDT | 19 Aug 24 11:20 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-163000 stop           | minikube                  | jenkins | v1.26.0 | 19 Aug 24 11:20 PDT | 19 Aug 24 11:20 PDT |
	| start   | -p stopped-upgrade-163000             | stopped-upgrade-163000    | jenkins | v1.33.1 | 19 Aug 24 11:20 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:20:53
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:20:53.640698   14738 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:20:53.640841   14738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:20:53.640848   14738 out.go:358] Setting ErrFile to fd 2...
	I0819 11:20:53.640851   14738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:20:53.640981   14738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:20:53.642075   14738 out.go:352] Setting JSON to false
	I0819 11:20:53.660190   14738 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6620,"bootTime":1724085033,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:20:53.660262   14738 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:20:53.665338   14738 out.go:177] * [stopped-upgrade-163000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:20:53.672222   14738 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:20:53.672271   14738 notify.go:220] Checking for updates...
	I0819 11:20:53.679355   14738 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:20:53.682247   14738 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:20:53.685359   14738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:20:53.688365   14738 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:20:53.691333   14738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:20:53.694565   14738 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:20:53.698292   14738 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 11:20:53.701281   14738 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:20:53.705333   14738 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:20:53.711249   14738 start.go:297] selected driver: qemu2
	I0819 11:20:53.711255   14738 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52396 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:20:53.711325   14738 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:20:53.713803   14738 cni.go:84] Creating CNI manager for ""
	I0819 11:20:53.713820   14738 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:20:53.713838   14738 start.go:340] cluster config:
	{Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52396 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:20:53.713890   14738 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:20:53.721302   14738 out.go:177] * Starting "stopped-upgrade-163000" primary control-plane node in "stopped-upgrade-163000" cluster
	I0819 11:20:53.725291   14738 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:20:53.725324   14738 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 11:20:53.725338   14738 cache.go:56] Caching tarball of preloaded images
	I0819 11:20:53.725409   14738 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:20:53.725418   14738 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 11:20:53.725475   14738 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/config.json ...
	I0819 11:20:53.725883   14738 start.go:360] acquireMachinesLock for stopped-upgrade-163000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:20:53.725917   14738 start.go:364] duration metric: took 29.167µs to acquireMachinesLock for "stopped-upgrade-163000"
	I0819 11:20:53.725927   14738 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:20:53.725933   14738 fix.go:54] fixHost starting: 
	I0819 11:20:53.726042   14738 fix.go:112] recreateIfNeeded on stopped-upgrade-163000: state=Stopped err=<nil>
	W0819 11:20:53.726051   14738 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:20:53.730341   14738 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-163000" ...
	I0819 11:20:53.134928   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:20:53.135163   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:20:53.163423   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:20:53.163546   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:20:53.204344   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:20:53.204408   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:20:53.224263   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:20:53.224324   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:20:53.234757   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:20:53.234841   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:20:53.245637   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:20:53.245704   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:20:53.255966   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:20:53.256022   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:20:53.265623   14497 logs.go:276] 0 containers: []
	W0819 11:20:53.265633   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:20:53.265680   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:20:53.282300   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:20:53.282319   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:20:53.282325   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:20:53.287010   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:20:53.287017   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:20:53.301236   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:20:53.301251   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:20:53.316421   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:20:53.316434   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:20:53.328113   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:20:53.328125   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:20:53.365080   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:20:53.365090   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:20:53.376890   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:20:53.376904   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:20:53.388632   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:20:53.388646   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:20:53.418255   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:20:53.418265   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:20:53.442085   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:20:53.442093   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:20:53.459535   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:20:53.459546   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:20:53.471265   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:20:53.471277   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:20:53.482726   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:20:53.482740   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:20:53.518154   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:20:53.518164   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:20:53.533674   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:20:53.533689   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:20:53.547596   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:20:53.547610   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:20:53.560029   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:20:53.560042   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:20:56.076689   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
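The 14497 process above is looping on the apiserver health endpoint: each cycle issues a GET against https://10.0.2.15:8443/healthz, treats a client timeout as "stopped", and then gathers component logs before retrying. A minimal Go sketch of that probe follows; the 5-second timeout is inferred from the gaps between the log timestamps, and skipping TLS verification is an assumption for illustration (the real check trusts the cluster CA).

// Sketch only, not minikube's code: probe the apiserver healthz endpoint
// once, treating any transport error (including a timeout) as "stopped".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // inferred from the log's ~5s check-to-failure gaps
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption for the sketch
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Printf("stopped: %v\n", err) // corresponds to the "stopped:" lines above
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}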
	I0819 11:20:53.734309   14738 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:20:53.734376   14738 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52361-:22,hostfwd=tcp::52362-:2376,hostname=stopped-upgrade-163000 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/disk.qcow2
	I0819 11:20:53.780832   14738 main.go:141] libmachine: STDOUT: 
	I0819 11:20:53.780852   14738 main.go:141] libmachine: STDERR: 
	I0819 11:20:53.780858   14738 main.go:141] libmachine: Waiting for VM to start (ssh -p 52361 docker@127.0.0.1)...
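While that loop runs, the 14738 process has relaunched the VM with the qemu-system-aarch64 command above and now blocks until the forwarded SSH port answers. A minimal sketch of such a wait, assuming a plain TCP dial against the 127.0.0.1:52361 forward from the log is enough to detect readiness (the real step also authenticates as the docker user); the 5-minute deadline is an assumption.

// Sketch only: poll a TCP port until it accepts connections or a deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // the forwarded SSH port is up
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForSSH("127.0.0.1:52361", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}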
	I0819 11:21:01.079358   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:01.079513   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:01.095574   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:01.095649   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:01.107734   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:01.107809   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:01.118628   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:01.118693   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:01.129213   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:01.129277   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:01.139971   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:01.140031   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:01.150860   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:01.150933   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:01.161269   14497 logs.go:276] 0 containers: []
	W0819 11:21:01.161280   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:01.161336   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:01.172032   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:01.172051   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:01.172056   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:01.184536   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:01.184546   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:01.197133   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:01.197145   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:01.208832   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:01.208842   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:01.248379   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:01.248389   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:01.253000   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:01.253010   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:01.289452   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:01.289465   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:01.303977   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:01.303989   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:01.315772   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:01.315781   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:01.327533   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:01.327544   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:01.346362   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:01.346374   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:01.367770   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:01.367780   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:01.382527   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:01.382536   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:01.394787   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:01.394798   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:01.410027   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:01.410037   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:01.421316   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:01.421327   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:01.433067   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:01.433078   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:03.960042   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:08.962382   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:08.962878   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:09.002578   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:09.002712   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:09.024566   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:09.024681   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:09.039963   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:09.040040   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:09.052905   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:09.052975   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:09.064168   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:09.064239   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:09.074632   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:09.074695   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:09.088675   14497 logs.go:276] 0 containers: []
	W0819 11:21:09.088692   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:09.088753   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:09.099338   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:09.099353   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:09.099359   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:09.113650   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:09.113663   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:09.125843   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:09.125853   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:09.138536   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:09.138545   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:09.151251   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:09.151263   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:09.169582   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:09.169595   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:09.182134   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:09.182150   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:09.186445   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:09.186451   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:09.222634   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:09.222643   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:09.240589   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:09.240601   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:09.253813   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:09.253826   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:09.266263   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:09.266275   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:09.280739   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:09.280747   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:09.293154   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:09.293166   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:09.307271   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:09.307284   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:09.324653   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:09.324666   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:09.347295   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:09.347305   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:11.897947   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:13.343914   14738 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/config.json ...
	I0819 11:21:13.344798   14738 machine.go:93] provisionDockerMachine start ...
	I0819 11:21:13.344957   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.345520   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.345535   14738 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:21:13.416619   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 11:21:13.416642   14738 buildroot.go:166] provisioning hostname "stopped-upgrade-163000"
	I0819 11:21:13.416722   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.416887   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.416896   14738 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-163000 && echo "stopped-upgrade-163000" | sudo tee /etc/hostname
	I0819 11:21:13.476766   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-163000
	
	I0819 11:21:13.476822   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.476975   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.476989   14738 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-163000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-163000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-163000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:21:13.533847   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:21:13.533864   14738 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19468-11838/.minikube CaCertPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19468-11838/.minikube}
	I0819 11:21:13.533872   14738 buildroot.go:174] setting up certificates
	I0819 11:21:13.533880   14738 provision.go:84] configureAuth start
	I0819 11:21:13.533885   14738 provision.go:143] copyHostCerts
	I0819 11:21:13.533958   14738 exec_runner.go:144] found /Users/jenkins/minikube-integration/19468-11838/.minikube/key.pem, removing ...
	I0819 11:21:13.533963   14738 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19468-11838/.minikube/key.pem
	I0819 11:21:13.534061   14738 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19468-11838/.minikube/key.pem (1675 bytes)
	I0819 11:21:13.534231   14738 exec_runner.go:144] found /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.pem, removing ...
	I0819 11:21:13.534234   14738 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.pem
	I0819 11:21:13.534285   14738 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.pem (1082 bytes)
	I0819 11:21:13.534387   14738 exec_runner.go:144] found /Users/jenkins/minikube-integration/19468-11838/.minikube/cert.pem, removing ...
	I0819 11:21:13.534391   14738 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19468-11838/.minikube/cert.pem
	I0819 11:21:13.534437   14738 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19468-11838/.minikube/cert.pem (1123 bytes)
	I0819 11:21:13.534553   14738 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-163000 san=[127.0.0.1 localhost minikube stopped-upgrade-163000]
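configureAuth regenerates the machine's server certificate with the org and SANs shown in the preceding line. A minimal sketch using Go's standard crypto packages follows; it is self-signed for brevity, whereas the real step signs with the minikube CA (ca.pem/ca-key.pem), and the 26280h lifetime is taken from the CertExpiration field in the cluster config above.

// Sketch only: issue a server cert for the SANs logged above
// (127.0.0.1, localhost, minikube, stopped-upgrade-163000).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-163000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-163000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; the real flow signs with the minikube CA key.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}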
	I0819 11:21:13.618788   14738 provision.go:177] copyRemoteCerts
	I0819 11:21:13.618826   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:21:13.618833   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0819 11:21:13.647412   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:21:13.654583   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 11:21:13.661833   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:21:13.668517   14738 provision.go:87] duration metric: took 134.633292ms to configureAuth
	I0819 11:21:13.668526   14738 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:21:13.668628   14738 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:21:13.668682   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.668772   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.668776   14738 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 11:21:13.723532   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 11:21:13.723542   14738 buildroot.go:70] root file system type: tmpfs
	I0819 11:21:13.723591   14738 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 11:21:13.723636   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.723744   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.723803   14738 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 11:21:13.781149   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 11:21:13.781206   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.781327   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.781334   14738 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 11:21:14.105609   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 11:21:14.105622   14738 machine.go:96] duration metric: took 760.814375ms to provisionDockerMachine
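The docker.service rewrite above follows an update-if-changed pattern: write the candidate unit to docker.service.new, diff it against the installed unit, and only on a difference move it into place and reload/enable/restart the daemon. A rough Go sketch of that pattern (not minikube's code; the systemctl steps are left as comments):

// Sketch only: replace a unit file only when its content actually changed.
package main

import (
	"bytes"
	"os"
)

func updateUnit(current, candidate string) (changed bool, err error) {
	old, _ := os.ReadFile(current) // a missing unit reads as empty, which forces an update
	next, err := os.ReadFile(candidate)
	if err != nil {
		return false, err
	}
	if bytes.Equal(old, next) {
		return false, os.Remove(candidate) // identical: drop the .new file, touch nothing
	}
	// Changed: install and restart, i.e.
	//   systemctl daemon-reload && systemctl enable docker && systemctl restart docker
	return true, os.Rename(candidate, current)
}

func main() {
	updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
}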
	I0819 11:21:14.105633   14738 start.go:293] postStartSetup for "stopped-upgrade-163000" (driver="qemu2")
	I0819 11:21:14.105641   14738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:21:14.105718   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:21:14.105742   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0819 11:21:14.134990   14738 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:21:14.136266   14738 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 11:21:14.136275   14738 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19468-11838/.minikube/addons for local assets ...
	I0819 11:21:14.136375   14738 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19468-11838/.minikube/files for local assets ...
	I0819 11:21:14.136497   14738 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem -> 123172.pem in /etc/ssl/certs
	I0819 11:21:14.136628   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:21:14.139153   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem --> /etc/ssl/certs/123172.pem (1708 bytes)
	I0819 11:21:14.146203   14738 start.go:296] duration metric: took 40.5645ms for postStartSetup
	I0819 11:21:14.146218   14738 fix.go:56] duration metric: took 20.42039125s for fixHost
	I0819 11:21:14.146251   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:14.146354   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:14.146359   14738 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:21:14.198480   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724091674.607215504
	
	I0819 11:21:14.198488   14738 fix.go:216] guest clock: 1724091674.607215504
	I0819 11:21:14.198492   14738 fix.go:229] Guest: 2024-08-19 11:21:14.607215504 -0700 PDT Remote: 2024-08-19 11:21:14.146219 -0700 PDT m=+20.526484792 (delta=460.996504ms)
	I0819 11:21:14.198502   14738 fix.go:200] guest clock delta is within tolerance: 460.996504ms
	I0819 11:21:14.198505   14738 start.go:83] releasing machines lock for "stopped-upgrade-163000", held for 20.4726885s
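The fixHost step closes by comparing the guest's `date +%s.%N` output against host time and accepting the machine when the skew is small; here the 460.996504ms delta passed. A minimal sketch of that comparison, with the 2-second tolerance being an assumption (the log shows only that ~461ms was within it):

// Sketch only: parse the guest clock and check its skew against the host clock.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	guestOut := "1724091674.607215504" // from the SSH `date +%s.%N` output above
	secs, _ := strconv.ParseFloat(guestOut, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
}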
	I0819 11:21:14.198565   14738 ssh_runner.go:195] Run: cat /version.json
	I0819 11:21:14.198577   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0819 11:21:14.198629   14738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:21:14.198666   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	W0819 11:21:14.227267   14738 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 11:21:14.227315   14738 ssh_runner.go:195] Run: systemctl --version
	I0819 11:21:14.229025   14738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:21:14.230551   14738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:21:14.230577   14738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 11:21:14.233449   14738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 11:21:14.237797   14738 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
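The two find/sed invocations above normalize every bridge and podman CNI config under /etc/cni/net.d to the 10.244.0.0/16 pod subnet. A minimal Go sketch of the core substitution, applied to a sample conflist fragment rather than to real files:

// Sketch only: rewrite the "subnet" field of a CNI config to 10.244.0.0/16,
// mirroring the sed expressions in the log.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	conf := `{"type":"bridge","ipam":{"subnet":"192.168.0.0/24"}}` // sample input
	fmt.Println(re.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`))
}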
	I0819 11:21:14.237806   14738 start.go:495] detecting cgroup driver to use...
	I0819 11:21:14.237870   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:21:14.244734   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 11:21:14.247563   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 11:21:14.250820   14738 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 11:21:14.250852   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 11:21:14.254035   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:21:14.256800   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 11:21:14.259492   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:21:14.262870   14738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:21:14.266172   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 11:21:14.269099   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 11:21:14.271797   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 11:21:14.275040   14738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:21:14.278162   14738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:21:14.280728   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:14.356513   14738 ssh_runner.go:195] Run: sudo systemctl restart containerd
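Because this driver/runtime combination uses the cgroupfs driver, the preceding sed edits force SystemdCgroup = false (among other settings) in /etc/containerd/config.toml before containerd is restarted. A minimal sketch of that substitution as a Go regexp over a sample line:

// Sketch only: the SystemdCgroup flip performed by sed in the log above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	toml := "    SystemdCgroup = true" // sample config.toml line
	fmt.Println(re.ReplaceAllString(toml, "${1}SystemdCgroup = false"))
}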
	I0819 11:21:14.365354   14738 start.go:495] detecting cgroup driver to use...
	I0819 11:21:14.365433   14738 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 11:21:14.372853   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:21:14.416853   14738 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:21:14.423454   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:21:14.428666   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:21:14.433218   14738 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 11:21:14.490565   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:21:14.496046   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:21:14.502094   14738 ssh_runner.go:195] Run: which cri-dockerd
	I0819 11:21:14.503444   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 11:21:14.506534   14738 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 11:21:14.511667   14738 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 11:21:14.574167   14738 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 11:21:14.651433   14738 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 11:21:14.651497   14738 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 11:21:14.656854   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:14.722883   14738 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:21:15.877407   14738 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154512625s)
	I0819 11:21:15.877464   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 11:21:15.881859   14738 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 11:21:15.888548   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:21:15.893293   14738 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 11:21:15.953898   14738 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 11:21:16.012943   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:16.080660   14738 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 11:21:16.086556   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:21:16.090915   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:16.153616   14738 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 11:21:16.191491   14738 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 11:21:16.191570   14738 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 11:21:16.193788   14738 start.go:563] Will wait 60s for crictl version
	I0819 11:21:16.193844   14738 ssh_runner.go:195] Run: which crictl
	I0819 11:21:16.195256   14738 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:21:16.210667   14738 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 11:21:16.210736   14738 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:21:16.226895   14738 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:21:16.898601   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:16.898710   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:16.910836   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:16.910910   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:16.925280   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:16.925359   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:16.936783   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:16.936865   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:16.948492   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:16.948564   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:16.960396   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:16.960474   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:16.971977   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:16.972054   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:16.984014   14497 logs.go:276] 0 containers: []
	W0819 11:21:16.984029   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:16.984087   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:16.995347   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:16.995365   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:16.995371   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:17.035928   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:17.035942   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:17.049991   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:17.050003   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:17.068817   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:17.068832   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:17.082442   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:17.082454   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:17.122908   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:17.122930   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:17.128126   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:17.128135   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:17.143739   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:17.143754   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:17.159757   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:17.159770   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:17.176909   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:17.176923   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:17.189372   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:17.189385   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:16.248143   14738 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 11:21:16.248260   14738 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 11:21:16.249548   14738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
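The bash one-liner above refreshes the host.minikube.internal entry: strip any stale line, append the 10.0.2.2 mapping, and copy the result back over /etc/hosts. The same logic in a short Go sketch, printing the rewritten file instead of installing it with sudo:

// Sketch only: drop any existing host.minikube.internal line, then append
// the fresh mapping, as the grep/echo pipeline in the log does.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "10.0.2.2\thost.minikube.internal")
	fmt.Println(strings.Join(kept, "\n")) // the real step writes this back via sudo cp
}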
	I0819 11:21:16.253120   14738 kubeadm.go:883] updating cluster {Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52396 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 11:21:16.253174   14738 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:21:16.253211   14738 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:21:16.263409   14738 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:21:16.263423   14738 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:21:16.263467   14738 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:21:16.266886   14738 ssh_runner.go:195] Run: which lz4
	I0819 11:21:16.268227   14738 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 11:21:16.269499   14738 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 11:21:16.269510   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 11:21:17.174558   14738 docker.go:649] duration metric: took 906.36575ms to copy over tarball
	I0819 11:21:17.174622   14738 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 11:21:18.350299   14738 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.17567075s)
	I0819 11:21:18.350314   14738 ssh_runner.go:146] rm: /preloaded.tar.lz4
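The preload path copies the ~360 MB lz4 tarball into the guest and unpacks it into /var with capability xattrs preserved, then deletes it. A minimal sketch of the extraction step as a command invocation, assuming tar and lz4 are available on the guest as they are in the buildroot image:

// Sketch only: the tar extraction run over SSH in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}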
	I0819 11:21:18.365835   14738 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:21:18.368819   14738 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 11:21:18.373886   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:18.431240   14738 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:21:17.213177   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:17.213197   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:17.228749   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:17.228761   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:17.242126   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:17.242138   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:17.255115   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:17.255128   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:17.273835   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:17.273847   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:17.295332   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:17.295349   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:19.811405   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:19.944169   14738 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.512916333s)
	I0819 11:21:19.944253   14738 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:21:19.956992   14738 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:21:19.957002   14738 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:21:19.957007   14738 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 11:21:19.961046   14738 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:21:19.962564   14738 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:21:19.964659   14738 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:21:19.964733   14738 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:21:19.966596   14738 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:21:19.966711   14738 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:21:19.968108   14738 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:21:19.968124   14738 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:21:19.969316   14738 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:21:19.969345   14738 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:21:19.970544   14738 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:21:19.970547   14738 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:21:19.972103   14738 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:21:19.972144   14738 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 11:21:19.972951   14738 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:21:19.974095   14738 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 11:21:20.408624   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:21:20.420567   14738 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 11:21:20.420594   14738 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:21:20.420640   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:21:20.421573   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:21:20.421758   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:21:20.426692   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:21:20.436589   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 11:21:20.439780   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 11:21:20.449501   14738 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 11:21:20.449523   14738 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:21:20.449575   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:21:20.449584   14738 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 11:21:20.449630   14738 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 11:21:20.449648   14738 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:21:20.449663   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:21:20.449676   14738 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:21:20.449716   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0819 11:21:20.456102   14738 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 11:21:20.456223   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:21:20.461296   14738 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 11:21:20.461312   14738 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:21:20.461361   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 11:21:20.467064   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 11:21:20.482781   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 11:21:20.482844   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 11:21:20.482879   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 11:21:20.490888   14738 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 11:21:20.490908   14738 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:21:20.490962   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:21:20.490980   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 11:21:20.491078   14738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:21:20.492860   14738 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 11:21:20.492877   14738 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 11:21:20.492909   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 11:21:20.508625   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 11:21:20.508646   14738 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 11:21:20.508658   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0819 11:21:20.508677   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 11:21:20.508744   14738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:21:20.508766   14738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 11:21:20.521632   14738 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 11:21:20.521661   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 11:21:20.521699   14738 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 11:21:20.521711   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0819 11:21:20.555957   14738 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 11:21:20.555973   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0819 11:21:20.564500   14738 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 11:21:20.564618   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:21:20.630781   14738 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 11:21:20.630809   14738 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:21:20.630834   14738 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0819 11:21:20.630869   14738 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:21:20.633175   14738 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:21:20.633189   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 11:21:20.667580   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 11:21:20.667711   14738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:21:20.745002   14738 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 11:21:20.745016   14738 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 11:21:20.745048   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 11:21:20.825107   14738 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:21:20.825176   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 11:21:21.143522   14738 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 11:21:21.143544   14738 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:21:21.143550   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0819 11:21:21.292005   14738 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 11:21:21.292043   14738 cache_images.go:92] duration metric: took 1.33503s to LoadCachedImages
	W0819 11:21:21.292091   14738 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
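
Annotation: each `docker image inspect --format {{.Id}}` above compares the runtime's image ID against the ID minikube expects for that tag; a mismatch marks the image as "needs transfer", so it is removed with `docker rmi` and re-loaded from the on-disk cache via `sudo cat <file> | docker load`. A hedged sketch of that comparison (the expected ID below is a placeholder, not a real digest):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer reports whether the runtime's copy of an image differs
    // from the ID recorded for the cached copy.
    func needsTransfer(image, wantID string) bool {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // image absent (or inspect failed): must transfer
    	}
    	return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
    	if needsTransfer("registry.k8s.io/pause:3.7", "sha256:<expected-id>") {
    		fmt.Println("needs transfer: remove, then `docker load` from cache")
    	}
    }
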
	I0819 11:21:21.292097   14738 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 11:21:21.292153   14738 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-163000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:21:21.292215   14738 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 11:21:21.305723   14738 cni.go:84] Creating CNI manager for ""
	I0819 11:21:21.305736   14738 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:21:21.305741   14738 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:21:21.305750   14738 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-163000 NodeName:stopped-upgrade-163000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:21:21.305814   14738 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-163000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 11:21:21.305880   14738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 11:21:21.308771   14738 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:21:21.308806   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:21:21.311289   14738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 11:21:21.316368   14738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:21:21.320879   14738 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0819 11:21:21.326103   14738 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 11:21:21.327362   14738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
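
Annotation: the /etc/hosts rewrite above uses the filter-append-replace idiom: strip any stale `control-plane.minikube.internal` entry, append the fresh one, write the result to a temp file, then put the temp file into place so the hosts file is never left half-written (the log does that last step with `sudo cp`; the sketch below uses a rename for the same effect). A minimal Go equivalent, assuming it runs with permission to replace /etc/hosts:

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "10.0.2.15\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale control-plane entry; keep everything else.
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)

    	// Write a temp file, then move it into place, so readers never
    	// observe a torn /etc/hosts.
    	tmp := "/etc/hosts.minikube-tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
    		panic(err)
    	}
    }
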
	I0819 11:21:21.330979   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:21.394113   14738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:21:21.401325   14738 certs.go:68] Setting up /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000 for IP: 10.0.2.15
	I0819 11:21:21.401334   14738 certs.go:194] generating shared ca certs ...
	I0819 11:21:21.401342   14738 certs.go:226] acquiring lock for ca certs: {Name:mka749b3c39f634f903dfb76b75647518084e393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:21:21.401509   14738 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.key
	I0819 11:21:21.401564   14738 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/proxy-client-ca.key
	I0819 11:21:21.401570   14738 certs.go:256] generating profile certs ...
	I0819 11:21:21.401643   14738 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/client.key
	I0819 11:21:21.401661   14738 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc
	I0819 11:21:21.401673   14738 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 11:21:21.485600   14738 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc ...
	I0819 11:21:21.485613   14738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc: {Name:mk6dc61fc842d4303f5e2be91343e2942c462b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:21:21.485910   14738 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc ...
	I0819 11:21:21.485923   14738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc: {Name:mkd32adbd348a4236fe43d6c4009602ecea8788e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:21:21.486057   14738 certs.go:381] copying /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc -> /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt
	I0819 11:21:21.486193   14738 certs.go:385] copying /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc -> /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key
	I0819 11:21:21.486414   14738 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/proxy-client.key
	I0819 11:21:21.486549   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/12317.pem (1338 bytes)
	W0819 11:21:21.486580   14738 certs.go:480] ignoring /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/12317_empty.pem, impossibly tiny 0 bytes
	I0819 11:21:21.486590   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:21:21.486610   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:21:21.486641   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:21:21.486664   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/key.pem (1675 bytes)
	I0819 11:21:21.486702   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem (1708 bytes)
	I0819 11:21:21.487049   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:21:21.494237   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:21:21.500733   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:21:21.507366   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:21:21.514750   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 11:21:21.522307   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:21:21.528992   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:21:21.535401   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:21:21.542505   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/12317.pem --> /usr/share/ca-certificates/12317.pem (1338 bytes)
	I0819 11:21:21.549564   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem --> /usr/share/ca-certificates/123172.pem (1708 bytes)
	I0819 11:21:21.556100   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:21:21.562824   14738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:21:21.567997   14738 ssh_runner.go:195] Run: openssl version
	I0819 11:21:21.569850   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12317.pem && ln -fs /usr/share/ca-certificates/12317.pem /etc/ssl/certs/12317.pem"
	I0819 11:21:21.572667   14738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12317.pem
	I0819 11:21:21.574106   14738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:06 /usr/share/ca-certificates/12317.pem
	I0819 11:21:21.574126   14738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12317.pem
	I0819 11:21:21.576025   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12317.pem /etc/ssl/certs/51391683.0"
	I0819 11:21:21.579247   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123172.pem && ln -fs /usr/share/ca-certificates/123172.pem /etc/ssl/certs/123172.pem"
	I0819 11:21:21.582652   14738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123172.pem
	I0819 11:21:21.584097   14738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:06 /usr/share/ca-certificates/123172.pem
	I0819 11:21:21.584118   14738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123172.pem
	I0819 11:21:21.585894   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123172.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 11:21:21.588811   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:21:21.591553   14738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:21:21.593133   14738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:21:21.593156   14738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:21:21.594932   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:21:21.598296   14738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:21:21.599725   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 11:21:21.601858   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 11:21:21.603699   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 11:21:21.605643   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 11:21:21.607465   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 11:21:21.609298   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
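
Annotation: each `openssl x509 -checkend 86400` above asks whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit means the cert must be regenerated. The equivalent check using Go's standard library, with one of the cert paths from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// `-checkend 86400`: does the cert survive another 24 hours?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    	} else {
    		fmt.Println("certificate is good for at least 24h")
    	}
    }
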
	I0819 11:21:21.611102   14738 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52396 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:21:21.611162   14738 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:21:21.625439   14738 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:21:21.628396   14738 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 11:21:21.628402   14738 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 11:21:21.628422   14738 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 11:21:21.631418   14738 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:21:21.631710   14738 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-163000" does not appear in /Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:21:21.631811   14738 kubeconfig.go:62] /Users/jenkins/minikube-integration/19468-11838/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-163000" cluster setting kubeconfig missing "stopped-upgrade-163000" context setting]
	I0819 11:21:21.631991   14738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/kubeconfig: {Name:mkf06e67426049c2259f6e46b5143872117d8aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:21:21.632422   14738 kapi.go:59] client config for stopped-upgrade-163000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/client.key", CAFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106043d10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:21:21.632745   14738 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 11:21:21.635200   14738 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-163000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
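
Annotation: the drift detection above is plain `diff -u` with the exit status as the signal: status 0 means the on-disk kubeadm.yaml matches the new one, status 1 means it differs and the cluster must be reconfigured, anything else is a real failure. A hedged Go equivalent of that decision, using the paths from the log:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.Output() // out holds the unified diff even on exit 1
    	var exitErr *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("no drift: config unchanged")
    	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
    		// Exit code 1 = files differ; the diff body says what changed.
    		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
    	default:
    		panic(err) // exit code 2 or exec failure: diff itself failed
    	}
    }
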
	I0819 11:21:21.635206   14738 kubeadm.go:1160] stopping kube-system containers ...
	I0819 11:21:21.635239   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:21:21.645979   14738 docker.go:483] Stopping containers: [cba74a0177d5 bd9cc3b824ba e664d2838747 5b1fce91598f 70ca7c1620fa c9b1bc8e1717 b0d0e25e65a0 0be0dd934796]
	I0819 11:21:21.646042   14738 ssh_runner.go:195] Run: docker stop cba74a0177d5 bd9cc3b824ba e664d2838747 5b1fce91598f 70ca7c1620fa c9b1bc8e1717 b0d0e25e65a0 0be0dd934796
	I0819 11:21:21.656717   14738 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 11:21:21.662182   14738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:21:21.664830   14738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:21:21.664835   14738 kubeadm.go:157] found existing configuration files:
	
	I0819 11:21:21.664855   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/admin.conf
	I0819 11:21:21.667205   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:21:21.667222   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:21:21.670200   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/kubelet.conf
	I0819 11:21:21.672908   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:21:21.672935   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:21:21.675430   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/controller-manager.conf
	I0819 11:21:21.678465   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:21:21.678488   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:21:21.681321   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/scheduler.conf
	I0819 11:21:21.683598   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:21:21.683619   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:21:21.686775   14738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:21:21.689974   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:21:21.712534   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:21:22.331606   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:21:22.452432   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:21:22.479054   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:21:22.507688   14738 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:21:22.507770   14738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:21:23.009875   14738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:21:23.509840   14738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:21:23.514452   14738 api_server.go:72] duration metric: took 1.006770417s to wait for apiserver process to appear ...
	I0819 11:21:23.514462   14738 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:21:23.514470   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
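
Annotation: the healthz probes that follow (api_server.go:253 / :269) poll https://10.0.2.15:8443/healthz with a per-request client timeout, retrying until the API server answers or the wait deadline passes; every "stopped: ... Client.Timeout exceeded" line below is one failed attempt. A stripped-down sketch of that loop (endpoint from the log; certificate verification is skipped here only because the sketch carries no minikube CA bundle, and the timeout values are illustrative):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-request deadline, like the log's Client.Timeout
    		Transport: &http.Transport{
    			// No CA bundle in this sketch, so skip verification.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // back off briefly between probes
    	}
    	fmt.Println("gave up waiting for apiserver healthz")
    }
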
	I0819 11:21:24.813662   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:24.813888   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:24.844769   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:24.844893   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:24.868346   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:24.868457   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:24.881485   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:24.881563   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:24.893393   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:24.893469   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:24.905744   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:24.905807   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:24.916577   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:24.916647   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:24.926636   14497 logs.go:276] 0 containers: []
	W0819 11:21:24.926647   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:24.926704   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:24.937244   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:24.937265   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:24.937270   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:24.949102   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:24.949116   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:24.966570   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:24.966584   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:24.977881   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:24.977895   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:24.989441   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:24.989454   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:24.993825   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:24.993835   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:25.029485   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:25.029499   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:25.043600   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:25.043614   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:25.056475   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:25.056486   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:25.078673   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:25.078681   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:25.092144   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:25.092155   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:25.106017   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:25.106027   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:25.117690   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:25.117699   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:25.129641   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:25.129651   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:25.147566   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:25.147575   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:25.158513   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:25.158525   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:25.194935   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:25.194942   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:28.516612   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:28.516658   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:27.709011   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:33.517042   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:33.517096   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:32.709395   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:32.709486   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:32.720247   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:32.720315   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:32.730936   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:32.731000   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:32.742353   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:32.742418   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:32.753443   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:32.753511   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:32.763906   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:32.763970   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:32.774790   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:32.774853   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:32.784573   14497 logs.go:276] 0 containers: []
	W0819 11:21:32.784584   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:32.784641   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:32.795166   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:32.795188   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:32.795196   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:32.829596   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:32.829608   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:32.843706   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:32.843717   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:32.857894   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:32.857906   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:32.869093   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:32.869104   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:32.889574   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:32.889586   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:32.907563   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:32.907574   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:32.922908   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:32.922919   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:32.934653   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:32.934664   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:32.939211   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:32.939219   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:32.953468   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:32.953479   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:32.964910   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:32.964921   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:32.976617   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:32.976630   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:32.987739   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:32.987749   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:33.023566   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:33.023574   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:33.038105   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:33.038114   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:33.056007   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:33.056020   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:35.580350   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:38.517582   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:38.517655   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:40.582627   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:40.582745   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:40.594750   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:40.594820   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:40.605589   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:40.605652   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:40.616658   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:40.616724   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:40.627098   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:40.627168   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:40.637578   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:40.637637   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:40.650071   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:40.650132   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:40.660939   14497 logs.go:276] 0 containers: []
	W0819 11:21:40.660951   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:40.661006   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:40.672093   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:40.672112   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:40.672118   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:40.676637   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:40.676644   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:40.691611   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:40.691621   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:40.704812   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:40.704822   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:40.716855   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:40.716871   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:40.729143   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:40.729157   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:40.753196   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:40.753211   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:40.791628   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:40.791650   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:40.804160   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:40.804172   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:40.817126   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:40.817142   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:40.834673   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:40.834682   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:40.846781   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:40.846794   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:40.859629   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:40.859641   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:40.894750   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:40.894760   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:40.906415   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:40.906425   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:40.920806   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:40.920816   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:40.931910   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:40.931920   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:43.518520   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:43.518553   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:43.453012   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:48.519698   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:48.519719   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:48.455304   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:48.455588   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:21:48.484765   14497 logs.go:276] 2 containers: [c75ea31785de 73b6ea415881]
	I0819 11:21:48.484885   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:21:48.501582   14497 logs.go:276] 2 containers: [0b6d1c937b20 89cb092cb057]
	I0819 11:21:48.501679   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:21:48.515247   14497 logs.go:276] 1 containers: [2c90137ecacc]
	I0819 11:21:48.515326   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:21:48.526650   14497 logs.go:276] 2 containers: [27115d75bfca d8442dadb356]
	I0819 11:21:48.526707   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:21:48.537984   14497 logs.go:276] 1 containers: [a32341ff1eda]
	I0819 11:21:48.538064   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:21:48.548724   14497 logs.go:276] 2 containers: [8908fc229d49 f94b194fc3ad]
	I0819 11:21:48.548792   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:21:48.558691   14497 logs.go:276] 0 containers: []
	W0819 11:21:48.558704   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:21:48.558762   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:21:48.569560   14497 logs.go:276] 2 containers: [2338601903cd 0e3b67602bd8]
	I0819 11:21:48.569577   14497 logs.go:123] Gathering logs for etcd [89cb092cb057] ...
	I0819 11:21:48.569585   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cb092cb057"
	I0819 11:21:48.583788   14497 logs.go:123] Gathering logs for kube-controller-manager [8908fc229d49] ...
	I0819 11:21:48.583799   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8908fc229d49"
	I0819 11:21:48.601291   14497 logs.go:123] Gathering logs for storage-provisioner [2338601903cd] ...
	I0819 11:21:48.601300   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2338601903cd"
	I0819 11:21:48.613004   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:21:48.613014   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:21:48.635842   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:21:48.635851   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:21:48.639833   14497 logs.go:123] Gathering logs for kube-apiserver [c75ea31785de] ...
	I0819 11:21:48.639839   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75ea31785de"
	I0819 11:21:48.654227   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:21:48.654237   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:21:48.693487   14497 logs.go:123] Gathering logs for etcd [0b6d1c937b20] ...
	I0819 11:21:48.693497   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b6d1c937b20"
	I0819 11:21:48.718238   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:21:48.718248   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:21:48.735496   14497 logs.go:123] Gathering logs for kube-apiserver [73b6ea415881] ...
	I0819 11:21:48.735508   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73b6ea415881"
	I0819 11:21:48.748680   14497 logs.go:123] Gathering logs for kube-controller-manager [f94b194fc3ad] ...
	I0819 11:21:48.748690   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f94b194fc3ad"
	I0819 11:21:48.768942   14497 logs.go:123] Gathering logs for kube-scheduler [27115d75bfca] ...
	I0819 11:21:48.768953   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27115d75bfca"
	I0819 11:21:48.780238   14497 logs.go:123] Gathering logs for kube-scheduler [d8442dadb356] ...
	I0819 11:21:48.780248   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8442dadb356"
	I0819 11:21:48.792040   14497 logs.go:123] Gathering logs for kube-proxy [a32341ff1eda] ...
	I0819 11:21:48.792051   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32341ff1eda"
	I0819 11:21:48.803794   14497 logs.go:123] Gathering logs for storage-provisioner [0e3b67602bd8] ...
	I0819 11:21:48.803804   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3b67602bd8"
	I0819 11:21:48.815863   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:21:48.815873   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:21:48.850081   14497 logs.go:123] Gathering logs for coredns [2c90137ecacc] ...
	I0819 11:21:48.850092   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c90137ecacc"
	I0819 11:21:51.364334   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:53.520711   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:53.520737   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:56.366640   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:56.366682   14497 kubeadm.go:597] duration metric: took 4m4.11502425s to restartPrimaryControlPlane
	W0819 11:21:56.366715   14497 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 11:21:56.366734   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 11:21:57.367902   14497 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.001162625s)
	I0819 11:21:57.367978   14497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:21:57.372990   14497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:21:57.375738   14497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:21:57.378575   14497 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:21:57.378581   14497 kubeadm.go:157] found existing configuration files:
	
	I0819 11:21:57.378601   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/admin.conf
	I0819 11:21:57.381663   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:21:57.381689   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:21:57.384646   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/kubelet.conf
	I0819 11:21:57.387053   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:21:57.387078   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:21:57.390142   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/controller-manager.conf
	I0819 11:21:57.392983   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:21:57.393008   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:21:57.395446   14497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/scheduler.conf
	I0819 11:21:57.398152   14497 kubeadm.go:163] "https://control-plane.minikube.internal:52176" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52176 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:21:57.398174   14497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:21:57.401137   14497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:21:57.418198   14497 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 11:21:57.418226   14497 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:21:57.466086   14497 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:21:57.466139   14497 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:21:57.466195   14497 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 11:21:57.516197   14497 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:21:57.519457   14497 out.go:235]   - Generating certificates and keys ...
	I0819 11:21:57.519490   14497 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:21:57.519524   14497 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:21:57.519566   14497 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 11:21:57.519595   14497 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 11:21:57.519630   14497 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 11:21:57.519662   14497 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 11:21:57.519696   14497 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 11:21:57.519733   14497 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 11:21:57.519773   14497 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 11:21:57.519813   14497 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 11:21:57.519832   14497 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 11:21:57.519858   14497 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:21:57.570399   14497 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:21:57.744048   14497 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:21:57.790797   14497 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:21:57.933781   14497 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:21:57.965589   14497 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:21:57.965945   14497 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:21:57.965966   14497 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:21:58.059744   14497 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:21:58.522009   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:58.522030   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:58.063391   14497 out.go:235]   - Booting up control plane ...
	I0819 11:21:58.063520   14497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:21:58.063739   14497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:21:58.063856   14497 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:21:58.063972   14497 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:21:58.064134   14497 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 11:22:02.561465   14497 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502627 seconds
	I0819 11:22:02.561625   14497 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:22:02.566803   14497 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:22:03.075989   14497 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:22:03.076094   14497 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-015000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:22:03.581352   14497 kubeadm.go:310] [bootstrap-token] Using token: fmn8m1.0hyitk70kfiab5le
	I0819 11:22:03.523623   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:03.523662   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:03.584008   14497 out.go:235]   - Configuring RBAC rules ...
	I0819 11:22:03.584074   14497 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:22:03.584144   14497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:22:03.586506   14497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:22:03.590586   14497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:22:03.591462   14497 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:22:03.592967   14497 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:22:03.595917   14497 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:22:03.773901   14497 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:22:03.986893   14497 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:22:03.987250   14497 kubeadm.go:310] 
	I0819 11:22:03.987281   14497 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:22:03.987284   14497 kubeadm.go:310] 
	I0819 11:22:03.987321   14497 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:22:03.987323   14497 kubeadm.go:310] 
	I0819 11:22:03.987339   14497 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:22:03.987373   14497 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:22:03.987398   14497 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:22:03.987401   14497 kubeadm.go:310] 
	I0819 11:22:03.987494   14497 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:22:03.987499   14497 kubeadm.go:310] 
	I0819 11:22:03.987521   14497 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:22:03.987524   14497 kubeadm.go:310] 
	I0819 11:22:03.987550   14497 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:22:03.987594   14497 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:22:03.987667   14497 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:22:03.987671   14497 kubeadm.go:310] 
	I0819 11:22:03.987727   14497 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:22:03.987769   14497 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:22:03.987775   14497 kubeadm.go:310] 
	I0819 11:22:03.987824   14497 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fmn8m1.0hyitk70kfiab5le \
	I0819 11:22:03.987874   14497 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3f715a0124d50cfae4e4dfc474638f45f1ddd0476a0318801e6849c5425b2951 \
	I0819 11:22:03.987884   14497 kubeadm.go:310] 	--control-plane 
	I0819 11:22:03.987886   14497 kubeadm.go:310] 
	I0819 11:22:03.987925   14497 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:22:03.987927   14497 kubeadm.go:310] 
	I0819 11:22:03.987966   14497 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fmn8m1.0hyitk70kfiab5le \
	I0819 11:22:03.988018   14497 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3f715a0124d50cfae4e4dfc474638f45f1ddd0476a0318801e6849c5425b2951 
	I0819 11:22:03.988077   14497 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 11:22:03.988085   14497 cni.go:84] Creating CNI manager for ""
	I0819 11:22:03.988093   14497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:22:03.991939   14497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 11:22:03.999911   14497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 11:22:04.002822   14497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 11:22:04.007516   14497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:22:04.007557   14497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:22:04.007577   14497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-015000 minikube.k8s.io/updated_at=2024_08_19T11_22_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=running-upgrade-015000 minikube.k8s.io/primary=true
	I0819 11:22:04.048946   14497 ops.go:34] apiserver oom_adj: -16
	I0819 11:22:04.048946   14497 kubeadm.go:1113] duration metric: took 41.420417ms to wait for elevateKubeSystemPrivileges
	I0819 11:22:04.049042   14497 kubeadm.go:394] duration metric: took 4m11.811526917s to StartCluster
	I0819 11:22:04.049055   14497 settings.go:142] acquiring lock: {Name:mk15c923e9a2cce6164c6c5cc70f47fd16c4c208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:22:04.049209   14497 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:22:04.049576   14497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/kubeconfig: {Name:mkf06e67426049c2259f6e46b5143872117d8aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:22:04.049761   14497 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:22:04.049772   14497 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 11:22:04.049808   14497 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-015000"
	I0819 11:22:04.049820   14497 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-015000"
	W0819 11:22:04.049824   14497 addons.go:243] addon storage-provisioner should already be in state true
	I0819 11:22:04.049829   14497 config.go:182] Loaded profile config "running-upgrade-015000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:22:04.049836   14497 host.go:66] Checking if "running-upgrade-015000" exists ...
	I0819 11:22:04.049884   14497 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-015000"
	I0819 11:22:04.049897   14497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-015000"
	I0819 11:22:04.050700   14497 kapi.go:59] client config for running-upgrade-015000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/running-upgrade-015000/client.key", CAFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b1bd10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:22:04.050832   14497 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-015000"
	W0819 11:22:04.050838   14497 addons.go:243] addon default-storageclass should already be in state true
	I0819 11:22:04.050845   14497 host.go:66] Checking if "running-upgrade-015000" exists ...
	I0819 11:22:04.053885   14497 out.go:177] * Verifying Kubernetes components...
	I0819 11:22:04.054216   14497 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:22:04.058242   14497 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:22:04.058249   14497 sshutil.go:53] new ssh client: &{IP:localhost Port:52144 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/running-upgrade-015000/id_rsa Username:docker}
	I0819 11:22:04.060852   14497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:22:04.068045   14497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:22:04.068109   14497 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:22:04.068117   14497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:22:04.068121   14497 sshutil.go:53] new ssh client: &{IP:localhost Port:52144 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/running-upgrade-015000/id_rsa Username:docker}
	I0819 11:22:04.151923   14497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:22:04.157225   14497 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:22:04.157270   14497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:22:04.161307   14497 api_server.go:72] duration metric: took 111.5325ms to wait for apiserver process to appear ...
	I0819 11:22:04.161314   14497 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:22:04.161321   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:04.175020   14497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:22:04.253321   14497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:22:04.526774   14497 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 11:22:04.526785   14497 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 11:22:08.525765   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:08.525841   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:09.163391   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:09.163436   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:13.528327   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:13.528368   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:14.164178   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:14.164199   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:18.530656   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:18.530720   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:19.164585   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:19.164644   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:23.533096   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:23.533212   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:22:23.546347   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:22:23.546414   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:22:23.556734   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:22:23.556801   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:22:23.567587   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:22:23.567651   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:22:23.578104   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:22:23.578176   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:22:23.592357   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:22:23.592419   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:22:23.602938   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:22:23.602996   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:22:23.613378   14738 logs.go:276] 0 containers: []
	W0819 11:22:23.613389   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:22:23.613437   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:22:23.628484   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:22:23.628506   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:22:23.628512   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:22:24.165252   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:24.165275   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:23.652206   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:22:23.652217   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:22:23.663784   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:22:23.663794   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:22:23.779883   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:22:23.779894   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:22:23.808140   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:22:23.808151   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:22:23.820046   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:22:23.820056   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:22:23.859096   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:22:23.859113   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:22:23.864559   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:22:23.864567   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:22:23.883597   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:22:23.883609   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:22:23.895348   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:22:23.895360   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:22:23.913614   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:22:23.913624   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:22:23.928314   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:22:23.928325   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:22:23.940202   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:22:23.940213   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:22:23.966104   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:22:23.966112   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:22:23.977651   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:22:23.977666   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:22:23.991125   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:22:23.991136   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:22:26.507304   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:29.165922   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:29.165948   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:31.510049   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:31.510210   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:22:31.528506   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:22:31.528592   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:22:31.540294   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:22:31.540366   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:22:31.551455   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:22:31.551522   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:22:31.561773   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:22:31.561839   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:22:31.573082   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:22:31.573151   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:22:31.583305   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:22:31.583371   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:22:31.593891   14738 logs.go:276] 0 containers: []
	W0819 11:22:31.593900   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:22:31.593967   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:22:31.604262   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:22:31.604280   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:22:31.604285   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:22:31.608330   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:22:31.608337   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:22:31.646259   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:22:31.646273   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:22:31.661751   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:22:31.661763   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:22:31.675670   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:22:31.675682   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:22:31.697007   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:22:31.697017   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:22:31.723193   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:22:31.723203   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:22:31.748607   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:22:31.748615   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:22:31.759751   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:22:31.759765   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:22:31.797418   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:22:31.797426   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:22:31.811567   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:22:31.811579   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:22:31.825749   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:22:31.825758   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:22:31.836458   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:22:31.836469   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:22:31.857707   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:22:31.857717   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:22:31.884543   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:22:31.884553   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:22:31.896304   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:22:31.896320   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:22:34.166819   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:34.166858   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 11:22:34.529157   14497 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 11:22:34.533615   14497 out.go:177] * Enabled addons: storage-provisioner
	I0819 11:22:34.545488   14497 addons.go:510] duration metric: took 30.495866166s for enable addons: enabled=[storage-provisioner]
	I0819 11:22:34.411612   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:39.168052   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:39.168097   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:39.413919   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:39.414025   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:22:39.425724   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:22:39.425798   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:22:39.436467   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:22:39.436530   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:22:39.446910   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:22:39.446990   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:22:39.460952   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:22:39.461033   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:22:39.476265   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:22:39.476334   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:22:39.486726   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:22:39.486790   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:22:39.496887   14738 logs.go:276] 0 containers: []
	W0819 11:22:39.496900   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:22:39.496955   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:22:39.508899   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:22:39.508915   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:22:39.508920   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:22:39.521844   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:22:39.521853   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:22:39.526469   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:22:39.526477   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:22:39.540531   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:22:39.540542   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:22:39.554669   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:22:39.554679   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:22:39.569108   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:22:39.569124   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:22:39.596555   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:22:39.596567   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:22:39.613670   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:22:39.613680   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:22:39.639673   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:22:39.639681   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:22:39.676933   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:22:39.676949   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:22:39.691869   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:22:39.691880   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:22:39.717050   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:22:39.717061   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:22:39.728215   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:22:39.728224   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:22:39.740799   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:22:39.740812   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:22:39.778245   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:22:39.778254   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:22:39.789536   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:22:39.789549   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:22:42.303755   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:44.169613   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:44.169662   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:47.306012   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:47.306115   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:22:47.320735   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:22:47.320808   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:22:47.332287   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:22:47.332352   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:22:47.342555   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:22:47.342628   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:22:47.353059   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:22:47.353125   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:22:47.363453   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:22:47.363515   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:22:47.377004   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:22:47.377075   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:22:47.386864   14738 logs.go:276] 0 containers: []
	W0819 11:22:47.386878   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:22:47.386931   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:22:47.402721   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:22:47.402737   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:22:47.402743   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:22:47.414825   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:22:47.414839   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:22:47.460076   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:22:47.460087   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:22:47.484824   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:22:47.484835   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:22:47.523125   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:22:47.523133   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:22:47.563024   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:22:47.563035   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:22:47.584050   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:22:47.584061   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:22:47.615770   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:22:47.615781   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:22:47.627493   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:22:47.627505   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:22:47.640376   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:22:47.640389   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:22:47.644948   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:22:47.644957   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:22:47.659719   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:22:47.659731   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:22:47.679103   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:22:47.679120   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:22:47.693186   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:22:47.693196   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:22:47.703926   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:22:47.703937   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:22:47.716272   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:22:47.716284   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
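
Each gathering pass above is bracketed by apiserver health probes: api_server.go:253 logs the attempt and api_server.go:269 logs the failure once the client gives up. A minimal sketch of that probe, assuming a plain net/http client; the 5-second timeout is inferred from the roughly 5 s gap between each "Checking" and "stopped" pair, and the helper name is illustrative rather than minikube's actual code:

// Illustrative reconstruction of the healthz probe pattern, not minikube's
// actual implementation. URL taken from the log; timeout inferred.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			// the guest apiserver's certificate is not trusted by the test host
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// surfaces as: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
		return err
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}

The repeated "Client.Timeout exceeded while awaiting headers" lines mean the connection to 10.0.2.15:8443 never returns response headers within the timeout, which is why each probe failure triggers another log-collection pass.
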
	I0819 11:22:49.171557   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:49.171598   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:50.236696   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:54.173858   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:54.173893   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:55.238043   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:55.238198   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:22:55.250583   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:22:55.250660   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:22:55.263049   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:22:55.263118   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:22:55.273290   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:22:55.273357   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:22:55.283879   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:22:55.283955   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:22:55.294342   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:22:55.294402   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:22:55.304845   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:22:55.304904   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:22:55.319626   14738 logs.go:276] 0 containers: []
	W0819 11:22:55.319639   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:22:55.319693   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:22:55.329918   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:22:55.329935   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:22:55.329941   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:22:55.341621   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:22:55.341632   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:22:55.355988   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:22:55.355999   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:22:55.394163   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:22:55.394173   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:22:55.428500   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:22:55.428511   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:22:55.442782   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:22:55.442793   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:22:55.463906   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:22:55.463917   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:22:55.476723   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:22:55.476735   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:22:55.502227   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:22:55.502238   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:22:55.516823   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:22:55.516832   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:22:55.539024   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:22:55.539036   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:22:55.552596   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:22:55.552606   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:22:55.569736   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:22:55.569746   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:22:55.581768   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:22:55.581779   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:22:55.585707   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:22:55.585715   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:22:55.597462   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:22:55.597473   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
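
Every pass opens the same way: one docker ps query per control-plane component, filtered by the k8s_<name> container prefix, producing the "N containers: [...]" lines from logs.go:276. A sketch of that discovery step, assuming local docker access (the log runs the identical command through ssh_runner against the guest); the function names here are illustrative:

// Sketch of per-component container discovery. The docker command line is
// copied from the log; local execution is the sketch's simplification.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// empty output -> zero containers, as seen for "kindnet" above
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
	}
}
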
	I0819 11:22:58.123273   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:59.176150   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:59.176198   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:03.125612   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:03.125867   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:03.150650   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:03.150742   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:03.170587   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:03.170659   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:03.182737   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:03.182798   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:03.193753   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:03.193820   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:03.204584   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:03.204658   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:03.215833   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:03.215900   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:03.226406   14738 logs.go:276] 0 containers: []
	W0819 11:23:03.226418   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:03.226476   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:03.236880   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:03.236896   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:03.236902   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:03.250862   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:03.250871   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:03.262307   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:03.262318   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:03.277245   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:03.277257   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:03.297219   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:03.297229   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:03.309813   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:03.309824   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:03.331642   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:03.331653   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:03.336453   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:03.336460   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:03.354740   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:03.354751   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:03.366659   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:03.366671   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:03.378782   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:03.378795   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:03.416764   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:03.416776   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:03.450973   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:03.451000   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:03.465206   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:03.465217   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:03.489576   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:03.489586   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:03.514812   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:03.514822   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:04.178502   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:04.178651   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:04.190171   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:04.190246   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:04.228321   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:04.228398   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:04.239392   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:04.239467   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:04.249657   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:04.249720   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:04.259739   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:04.259807   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:04.273429   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:04.273503   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:04.283620   14497 logs.go:276] 0 containers: []
	W0819 11:23:04.283631   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:04.283690   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:04.296910   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:04.296928   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:04.296934   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:04.308541   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:04.308552   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:04.322760   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:04.322772   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:04.334738   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:04.334748   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:04.352516   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:04.352526   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:04.387436   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:04.387448   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:04.401850   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:04.401863   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:04.415183   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:04.415197   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:04.426591   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:04.426603   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:04.437702   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:04.437712   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:04.474554   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:04.474563   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:04.479732   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:04.479740   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:04.504092   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:04.504101   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
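
Each discovered ID is then tailed in turn, producing the "Gathering logs for <component> [<id>] ..." pairs. The command below is taken verbatim from the ssh_runner lines; wrapping it in os/exec locally, rather than running it over SSH, is the sketch's only simplification:

// Sketch of the per-container tail step seen throughout this section.
package main

import (
	"fmt"
	"os/exec"
)

func tailContainer(id string) (string, error) {
	// docker logs writes the container's stderr to stderr, so capture both streams
	cmd := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	// a kube-apiserver ID from the pass above, used purely as an example input
	out, err := tailContainer("590b6b5e4db3")
	if err != nil {
		fmt.Println("tail failed:", err)
	}
	fmt.Print(out)
}
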
	I0819 11:23:07.017767   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:06.029179   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:12.020145   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:12.020321   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:12.041306   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:12.041403   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:12.056381   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:12.056460   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:12.069190   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:12.069259   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:12.080050   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:12.080118   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:12.090768   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:12.090832   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:12.101318   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:12.101386   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:12.111169   14497 logs.go:276] 0 containers: []
	W0819 11:23:12.111180   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:12.111236   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:12.121961   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:12.121976   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:12.121981   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:12.133761   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:12.133774   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:12.146172   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:12.146183   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:12.158765   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:12.158776   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:12.163535   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:12.163542   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:12.177971   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:12.177981   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:12.192591   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:12.192601   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:11.030138   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:11.030400   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:11.057783   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:11.057900   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:11.075781   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:11.075864   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:11.088905   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:11.088980   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:11.100206   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:11.100278   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:11.118516   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:11.118579   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:11.136209   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:11.136275   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:11.146830   14738 logs.go:276] 0 containers: []
	W0819 11:23:11.146844   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:11.146897   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:11.157123   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:11.157143   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:11.157150   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:11.168844   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:11.168855   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:11.190659   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:11.190669   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:11.201974   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:11.201986   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:11.237140   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:11.237153   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:11.241391   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:11.241399   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:11.260153   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:11.260163   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:11.273861   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:11.273871   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:11.284906   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:11.284915   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:11.303401   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:11.303412   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:11.315707   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:11.315716   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:11.352664   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:11.352670   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:11.377526   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:11.377537   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:11.392368   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:11.392382   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:11.403869   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:11.403880   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:11.424873   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:11.424885   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:12.207811   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:12.207825   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:12.222599   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:12.222610   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:12.240023   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:12.240035   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:12.252940   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:12.252950   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:12.279926   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:12.279936   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:12.315820   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:12.315828   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:14.852421   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:13.951928   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:19.855039   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:19.855235   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:19.875739   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:19.875819   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:19.888875   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:19.888949   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:19.900513   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:19.900578   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:19.911607   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:19.911681   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:19.922031   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:19.922095   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:19.932572   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:19.932642   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:19.950801   14497 logs.go:276] 0 containers: []
	W0819 11:23:19.950812   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:19.950873   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:19.960705   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:19.960723   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:19.960729   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:19.972396   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:19.972407   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:19.995396   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:19.995406   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:19.999916   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:19.999923   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:20.035619   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:20.035630   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:20.049786   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:20.049796   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:20.061275   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:20.061289   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:20.075570   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:20.075585   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:20.092958   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:20.092970   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:20.105770   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:20.105784   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:20.145379   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:20.145392   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:20.159574   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:20.159589   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:20.170948   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:20.170960   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:18.954255   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:18.954518   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:18.977297   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:18.977402   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:18.996260   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:18.996340   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:19.008251   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:19.008321   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:19.018684   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:19.018745   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:19.029932   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:19.029993   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:19.040852   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:19.040912   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:19.051327   14738 logs.go:276] 0 containers: []
	W0819 11:23:19.051338   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:19.051396   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:19.061890   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:19.061906   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:19.061911   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:19.100030   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:19.100038   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:19.115758   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:19.115770   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:19.141487   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:19.141497   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:19.145881   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:19.145888   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:19.171905   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:19.171916   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:19.187666   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:19.187680   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:19.205500   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:19.205512   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:19.251054   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:19.251066   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:19.265821   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:19.265832   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:19.278889   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:19.278903   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:19.290725   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:19.290739   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:19.311596   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:19.311607   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:19.326567   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:19.326578   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:19.338953   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:19.338963   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:19.360362   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:19.360373   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:21.873747   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:22.687553   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:26.875964   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:26.876224   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:26.902018   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:26.902130   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:26.916403   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:26.916488   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:26.929782   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:26.929851   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:26.940238   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:26.940302   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:26.950905   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:26.950968   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:26.961989   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:26.962051   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:26.972013   14738 logs.go:276] 0 containers: []
	W0819 11:23:26.972024   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:26.972082   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:26.982193   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:26.982210   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:26.982216   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:26.997233   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:26.997245   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:27.022230   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:27.022240   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:27.035037   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:27.035049   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:27.072192   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:27.072206   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:27.087290   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:27.087300   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:27.098823   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:27.098836   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:27.110744   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:27.110754   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:27.122385   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:27.122398   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:27.126773   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:27.126779   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:27.144016   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:27.144029   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:27.156175   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:27.156186   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:27.170261   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:27.170279   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:27.196037   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:27.196048   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:27.221181   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:27.221192   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:27.232922   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:27.232933   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:27.689444   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:27.689599   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:27.703762   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:27.703841   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:27.716961   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:27.717028   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:27.728381   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:27.728446   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:27.738999   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:27.739065   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:27.749947   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:27.750014   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:27.760313   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:27.760379   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:27.769962   14497 logs.go:276] 0 containers: []
	W0819 11:23:27.769973   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:27.770030   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:27.780005   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:27.780020   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:27.780025   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:27.795923   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:27.795933   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:27.808043   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:27.808053   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:27.825964   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:27.825978   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:27.844566   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:27.844576   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:27.857908   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:27.857919   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:27.900979   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:27.900994   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:27.915102   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:27.915112   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:27.926378   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:27.926391   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:27.938099   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:27.938113   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:27.961545   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:27.961556   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:27.972734   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:27.972745   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:28.008489   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:28.008499   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:30.514950   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:29.773331   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:35.517255   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:35.517434   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:35.534507   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:35.534604   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:35.548381   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:35.548454   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:35.560285   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:35.560357   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:35.570696   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:35.570760   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:35.583144   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:35.583211   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:35.595077   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:35.595141   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:35.605098   14497 logs.go:276] 0 containers: []
	W0819 11:23:35.605110   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:35.605168   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:35.615398   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:35.615412   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:35.615418   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:35.658981   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:35.658993   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:35.673735   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:35.673745   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:35.689475   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:35.689486   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:35.701328   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:35.701339   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:35.719914   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:35.719925   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:35.731596   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:35.731607   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:35.749524   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:35.749533   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:35.761460   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:35.761477   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:35.799444   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:35.799457   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:35.804277   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:35.804287   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:35.816192   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:35.816205   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:35.829409   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:35.829420   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
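
A pass finishes with host-level collection: the kubelet and docker/cri-docker journals, a filtered dmesg, and a container listing that prefers crictl but falls back to docker when crictl is absent. A sketch with the shell fragments copied verbatim from the log; running them locally and discarding command errors are assumptions of the sketch:

// Sketch of the host-level gathering steps that close each pass.
package main

import (
	"fmt"
	"os/exec"
)

func run(script string) string {
	// errors are deliberately ignored here; the real runner logs them instead
	out, _ := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	return string(out)
}

func main() {
	// container status: prefer crictl, fall back to docker if crictl is missing
	fmt.Print(run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"))
	// recent kubelet and docker/cri-docker journal entries
	fmt.Print(run("sudo journalctl -u kubelet -n 400"))
	fmt.Print(run("sudo journalctl -u docker -u cri-docker -n 400"))
	// warnings and worse from the kernel ring buffer
	fmt.Print(run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"))
}
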
	I0819 11:23:34.774041   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:34.774337   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:34.801037   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:34.801161   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:34.821130   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:34.821213   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:34.833561   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:34.833637   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:34.844503   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:34.844576   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:34.854964   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:34.855033   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:34.870024   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:34.870091   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:34.880549   14738 logs.go:276] 0 containers: []
	W0819 11:23:34.880560   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:34.880612   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:34.890950   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:34.890967   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:34.890972   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:34.895674   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:34.895681   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:34.907314   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:34.907326   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:34.920081   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:34.920092   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:34.932697   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:34.932708   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:34.962301   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:34.962308   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:34.976268   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:34.976278   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:35.001427   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:35.001437   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:35.012774   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:35.012785   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:35.034988   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:35.034999   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:35.047819   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:35.047829   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:35.059691   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:35.059702   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:35.096752   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:35.096764   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:35.111423   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:35.111432   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:35.134165   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:35.134180   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:35.173706   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:35.173716   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:37.689676   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:38.355588   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:42.692026   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:42.692224   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:42.711456   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:42.711543   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:42.722614   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:42.722679   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:42.733559   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:42.733627   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:42.744024   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:42.744098   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:42.755024   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:42.755089   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:42.765856   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:42.765928   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:42.776445   14738 logs.go:276] 0 containers: []
	W0819 11:23:42.776455   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:42.776513   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:42.786856   14738 logs.go:276] 1 containers: [626478da71fb]
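	
	Each logs.go:276 line reports one docker ps pass, filtered by the k8s_<component> container-name prefix and formatted down to bare IDs; the kindnet pass legitimately returns zero containers on this cluster, hence the repeated logs.go:278 warning. A rough, illustrative equivalent of one enumeration pass (the component list is read off this log; the parsing and error handling are assumptions):
	
	// listComponentContainers sketches the "docker ps -a --filter=name=k8s_X"
	// enumeration seen at logs.go:276. Illustrative only, run locally rather
	// than over SSH as minikube does.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, c := range components {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("listing %s failed: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))              // one short ID per line
			fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:276
		}
	}
	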
	I0819 11:23:42.786872   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:42.786877   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:42.791261   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:42.791270   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:42.827396   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:42.827411   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:42.842028   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:42.842040   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:42.873449   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:42.873462   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:42.897771   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:42.897779   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:42.909195   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:42.909206   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:42.935880   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:42.935892   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:42.947425   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:42.947438   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:42.984578   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:42.984586   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:42.995818   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:42.995829   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:43.007019   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:43.007029   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:43.025042   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:43.025056   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:43.039474   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:43.039487   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:43.053873   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:43.053886   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:43.071705   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:43.071715   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
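	
	Every "Gathering logs for X ..." / ssh_runner.go:195 pair runs one /bin/bash -c command on the node and keeps roughly the last 400 lines. The source-to-command mapping below is transcribed from the commands visible in this report; the surrounding Go harness is only a sketch, not minikube's actual runner, and executes locally instead of over SSH:
	
	// gatherCmd maps a log source to the shell command this report shows
	// being run for it. Sketch only; the real flow dispatches these via SSH.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func gatherCmd(source, containerID string) string {
		switch source {
		case "kubelet":
			return "sudo journalctl -u kubelet -n 400"
		case "Docker":
			return "sudo journalctl -u docker -u cri-docker -n 400"
		case "dmesg":
			return "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
		case "describe nodes":
			return "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
		case "container status":
			// falls back to docker when crictl is absent
			return "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		default: // a per-container source such as kube-apiserver or etcd
			return fmt.Sprintf("docker logs --tail 400 %s", containerID)
		}
	}
	
	func main() {
		out, err := exec.Command("/bin/bash", "-c", gatherCmd("kubelet", "")).CombinedOutput()
		if err != nil {
			fmt.Println("gather failed:", err)
		}
		fmt.Print(string(out))
	}
	
	The container-status command is the one non-obvious entry: `which crictl || echo crictl` always yields a command word, so when crictl is missing the first branch fails at execution time and the outer `|| sudo docker ps -a` fallback still fires.
	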
	I0819 11:23:43.357964   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:43.358061   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:43.371745   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:43.371819   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:43.383242   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:43.383313   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:43.394210   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:43.394284   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:43.404922   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:43.404990   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:43.417228   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:43.417300   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:43.430177   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:43.430237   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:43.440306   14497 logs.go:276] 0 containers: []
	W0819 11:23:43.440316   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:43.440371   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:43.450861   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:43.450883   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:43.450890   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:43.476065   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:43.476076   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:43.487207   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:43.487224   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:43.524968   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:43.524979   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:43.539352   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:43.539362   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:43.552611   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:43.552622   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:43.564587   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:43.564598   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:43.576564   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:43.576576   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:43.595000   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:43.595011   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:43.615224   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:43.615237   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:43.652907   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:43.652918   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:43.657754   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:43.657761   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:43.670123   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:43.670134   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:46.193921   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:45.588118   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:51.196185   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:51.196312   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:51.208629   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:51.208702   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:51.219282   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:51.219355   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:51.230063   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:51.230126   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:51.246680   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:51.246751   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:51.257603   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:51.257668   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:51.268222   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:51.268280   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:51.278129   14497 logs.go:276] 0 containers: []
	W0819 11:23:51.278141   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:51.278197   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:51.288803   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:51.288816   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:51.288821   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:51.306537   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:51.306548   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:51.330314   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:51.330322   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:51.342256   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:51.342267   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:51.347035   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:51.347041   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:51.381915   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:51.381925   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:51.395738   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:51.395748   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:51.407572   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:51.407582   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:51.421288   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:51.421298   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:51.434224   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:51.434236   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:51.472697   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:51.472705   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:51.488691   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:51.488701   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:51.500632   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:51.500642   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:23:50.590694   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:50.591000   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:50.618615   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:50.618746   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:50.639591   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:50.639670   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:50.653379   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:50.653455   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:50.669155   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:50.669227   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:50.679455   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:50.679526   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:50.690105   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:50.690180   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:50.700779   14738 logs.go:276] 0 containers: []
	W0819 11:23:50.700790   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:50.700849   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:50.712300   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:50.712324   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:50.712331   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:50.723846   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:50.723859   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:50.728441   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:50.728449   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:50.763154   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:50.763167   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:50.777116   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:50.777127   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:50.788729   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:50.788743   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:50.811630   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:50.811637   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:50.823109   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:50.823120   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:50.844261   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:50.844275   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:50.859006   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:50.859019   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:50.872358   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:50.872369   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:50.900607   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:50.900618   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:50.914445   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:50.914455   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:50.933764   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:50.933774   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:50.971903   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:50.971915   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:50.985851   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:50.985863   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:53.497951   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:54.013813   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:58.500245   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:58.500596   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:58.533472   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:58.533604   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:58.552452   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:58.552547   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:58.566391   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:58.566466   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:58.578160   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:58.578231   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:58.588251   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:58.588330   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:58.598734   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:58.598800   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:58.608620   14738 logs.go:276] 0 containers: []
	W0819 11:23:58.608632   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:58.608689   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:58.619275   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:58.619293   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:58.619299   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:58.623926   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:58.623933   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:59.016074   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:59.016179   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:59.027090   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:23:59.027154   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:59.041775   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:23:59.041839   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:59.057012   14497 logs.go:276] 2 containers: [61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:23:59.057080   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:59.067370   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:23:59.067434   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:59.077774   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:23:59.077838   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:59.090281   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:23:59.090345   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:59.100217   14497 logs.go:276] 0 containers: []
	W0819 11:23:59.100227   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:59.100275   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:59.110492   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:23:59.110505   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:23:59.110511   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:59.122818   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:59.122832   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:59.160133   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:23:59.160143   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:23:59.178108   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:23:59.178122   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:23:59.192862   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:23:59.192872   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:23:59.207003   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:23:59.207015   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:23:59.218583   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:23:59.218597   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:23:59.236285   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:59.236299   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:59.259365   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:59.259374   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:59.264358   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:59.264365   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:59.301870   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:23:59.301880   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:23:59.316484   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:23:59.316499   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:23:59.327935   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:23:59.327949   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:01.847552   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:58.658316   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:58.658327   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:58.672957   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:58.672967   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:58.690819   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:58.690829   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:58.729638   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:58.729646   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:58.741378   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:58.741389   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:58.752758   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:58.752768   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:58.776882   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:58.776889   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:58.790129   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:58.790140   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:58.802877   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:58.802888   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:58.815148   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:58.815160   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:58.839432   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:58.839441   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:58.853121   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:58.853131   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:58.875093   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:58.875103   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:58.887226   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:58.887237   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:01.411155   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:06.849755   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:06.849855   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:06.861562   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:06.861634   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:06.872446   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:06.872521   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:06.883408   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:06.883493   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:06.899372   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:06.899441   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:06.910324   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:06.910389   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:06.926720   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:06.926793   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:06.936636   14497 logs.go:276] 0 containers: []
	W0819 11:24:06.936648   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:06.936707   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:06.947325   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:06.947342   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:06.947347   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:06.952260   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:06.952267   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:06.964160   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:06.964172   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:06.975637   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:06.975649   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:06.986962   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:06.986979   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:07.001697   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:07.001707   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:07.017074   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:07.017087   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:07.029859   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:07.029872   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:07.046805   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:07.046815   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:07.070621   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:07.070632   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:07.082081   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:07.082091   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:07.118074   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:07.118084   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:07.132512   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:07.132525   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:07.146319   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:07.146330   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:07.182816   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:07.182830   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:06.413857   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:06.414042   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:06.439236   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:06.439350   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:06.456157   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:06.456231   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:06.469380   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:06.469454   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:06.481282   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:06.481349   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:06.498021   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:06.498084   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:06.508638   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:06.508707   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:06.518584   14738 logs.go:276] 0 containers: []
	W0819 11:24:06.518595   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:06.518648   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:06.528772   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:06.528791   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:06.528796   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:06.540326   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:06.540336   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:06.565261   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:06.565270   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:06.577049   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:06.577060   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:06.597327   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:06.597336   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:06.610602   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:06.610615   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:06.635959   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:06.635969   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:06.640279   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:06.640287   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:06.676337   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:06.676349   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:06.691077   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:06.691088   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:06.705023   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:06.705034   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:06.720079   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:06.720092   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:06.733812   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:06.733824   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:06.745934   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:06.745945   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:06.782006   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:06.782022   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:06.793052   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:06.793066   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:09.696838   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:09.315836   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:14.699062   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:14.699155   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:14.710497   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:14.710566   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:14.721044   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:14.721112   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:14.731992   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:14.732059   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:14.742808   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:14.742877   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:14.753145   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:14.753203   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:14.763445   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:14.763507   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:14.773631   14497 logs.go:276] 0 containers: []
	W0819 11:24:14.773644   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:14.773702   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:14.785763   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:14.785778   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:14.785783   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:14.797377   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:14.797389   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:14.811843   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:14.811853   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:14.847816   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:14.847824   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:14.882593   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:14.882604   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:14.894626   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:14.894637   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:14.906344   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:14.906354   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:14.935484   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:14.935499   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:14.950180   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:14.950191   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:14.968143   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:14.968156   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:14.984206   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:14.984217   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:14.988833   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:14.988840   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:15.002932   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:15.002943   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:15.014813   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:15.014826   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:15.026927   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:15.026938   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:14.318145   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:14.318313   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:14.333107   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:14.333184   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:14.344985   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:14.345052   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:14.355387   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:14.355453   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:14.365909   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:14.365975   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:14.375884   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:14.375945   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:14.386486   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:14.386547   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:14.396633   14738 logs.go:276] 0 containers: []
	W0819 11:24:14.396643   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:14.396695   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:14.409329   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:14.409346   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:14.409353   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:14.447147   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:14.447154   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:14.482437   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:14.482448   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:14.494110   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:14.494124   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:14.506563   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:14.506574   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:14.517944   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:14.517958   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:14.532133   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:14.532144   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:14.556430   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:14.556443   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:14.570534   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:14.570543   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:14.584763   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:14.584774   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:14.597113   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:14.597122   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:14.601164   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:14.601172   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:14.622449   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:14.622459   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:14.637468   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:14.637481   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:14.656197   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:14.656208   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:14.668120   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:14.668131   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:17.194282   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:17.541225   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:22.195238   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
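	
	Note the changed error on this attempt: "dial tcp 10.0.2.15:8443: i/o timeout" rather than the earlier "context deadline exceeded", meaning the TCP connection itself never completed instead of an established request stalling at the HTTP layer. A tiny illustrative check that separates those two failure classes (not part of minikube):
	
	// dialCheck distinguishes "can't connect" (the dial error seen at
	// 11:24:22.195) from "connected but no healthy HTTP reply". Sketch only.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("tcp", "10.0.2.15:8443", 5*time.Second)
		if err != nil {
			fmt.Println("dial failed (apiserver port unreachable):", err)
			return
		}
		conn.Close()
		fmt.Println("TCP port open; a failure here would be at the HTTP layer instead")
	}
	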
	I0819 11:24:22.195338   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:22.207173   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:22.207245   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:22.218898   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:22.218962   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:22.229663   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:22.229733   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:22.240194   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:22.240258   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:22.250329   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:22.250393   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:22.261084   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:22.261150   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:22.271395   14738 logs.go:276] 0 containers: []
	W0819 11:24:22.271405   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:22.271463   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:22.281729   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:22.281749   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:22.281755   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:22.299287   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:22.299298   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:22.313187   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:22.313199   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:22.326077   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:22.326088   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:22.337665   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:22.337675   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:22.363194   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:22.363204   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:22.374743   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:22.374755   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:22.392263   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:22.392273   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:22.403763   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:22.403774   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:22.438057   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:22.438068   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:22.449172   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:22.449182   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:22.487128   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:22.487135   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:22.491607   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:22.491615   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:22.526411   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:22.526430   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:22.540775   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:22.540784   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:22.552871   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:22.552884   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:22.543591   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:22.543678   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:22.555014   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:22.555084   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:22.567038   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:22.567109   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:22.578390   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:22.578466   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:22.588860   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:22.588927   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:22.599048   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:22.599114   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:22.609935   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:22.610002   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:22.620278   14497 logs.go:276] 0 containers: []
	W0819 11:24:22.620290   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:22.620345   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:22.630762   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:22.630777   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:22.630783   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:22.642352   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:22.642363   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:22.654443   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:22.654454   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:22.672411   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:22.672421   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:22.685118   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:22.685128   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:22.689567   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:22.689576   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:22.709155   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:22.709167   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:22.724350   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:22.724361   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:22.736023   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:22.736034   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:22.774951   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:22.774961   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:22.786650   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:22.786660   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:22.798579   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:22.798589   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:22.822520   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:22.822529   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:22.860249   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:22.860258   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:22.874973   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:22.874986   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
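The block above is one full iteration of the pattern this log repeats: minikube probes the apiserver's /healthz endpoint (api_server.go:253), the probe dies with a client timeout (api_server.go:269, "context deadline exceeded"), and the tool falls back to enumerating containers and tailing their logs before trying again. A minimal sketch of such a probe follows; it is a hypothetical re-implementation for illustration, not minikube's actual code, and the 5-second timeout and skip-verify TLS config are assumptions read off the log output.

    // Hypothetical sketch of the healthz probe logged above; not minikube's code.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // assumption: probes above give up after ~5s
    		Transport: &http.Transport{
    			// assumption: the in-VM apiserver cert is not trusted by the prober
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		// mirrors the "stopped: <url>: Get ..." lines in the log
    		return fmt.Errorf("stopped: %s: %w", url, err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    		fmt.Println(err)
    	}
    }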
	I0819 11:24:25.392002   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:25.078547   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:30.394286   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:30.394399   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:30.406020   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:30.406099   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:30.417248   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:30.417320   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:30.429562   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:30.429636   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:30.440828   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:30.440901   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:30.452104   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:30.452178   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:30.463549   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:30.463615   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:30.479134   14497 logs.go:276] 0 containers: []
	W0819 11:24:30.479145   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:30.479202   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:30.490388   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:30.490403   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:30.490408   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:30.501834   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:30.501843   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:30.513493   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:30.513504   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:30.517974   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:30.517980   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:30.552178   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:30.552188   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:30.566198   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:30.566208   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:30.581962   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:30.581974   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:30.595997   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:30.596006   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:30.607436   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:30.607447   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:30.623024   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:30.623041   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:30.635295   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:30.635309   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:30.672981   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:30.672992   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:30.685435   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:30.685447   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:30.703950   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:30.703962   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:30.715942   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:30.715953   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
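Each ssh_runner.go:195 line running docker ps -a --filter=name=k8s_<component> --format={{.ID}} discovers the containers for one control-plane component: cri-dockerd names pod containers with a k8s_ prefix, so the name filter maps components to container IDs, and logs.go:276 then reports the count. A sketch of that discovery step, run locally as a stand-in for minikube's SSH runner (the helper name is hypothetical):

    // Sketch of the per-component container discovery seen above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, // cri-dockerd's k8s_<component>_... naming
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		containerIDs(c)
    	}
    }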
	I0819 11:24:30.080619   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:30.080778   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:30.097381   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:30.097470   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:30.112021   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:30.112092   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:30.123515   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:30.123585   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:30.134445   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:30.134514   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:30.146475   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:30.146536   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:30.156588   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:30.156650   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:30.167651   14738 logs.go:276] 0 containers: []
	W0819 11:24:30.167663   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:30.167718   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:30.182527   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:30.182546   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:30.182552   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:30.221669   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:30.221680   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:30.226618   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:30.226626   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:30.261787   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:30.261798   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:30.275662   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:30.275673   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:30.287679   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:30.287690   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:30.302329   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:30.302341   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:30.323046   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:30.323055   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:30.334935   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:30.334946   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:30.360247   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:30.360259   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:30.374855   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:30.374865   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:30.386347   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:30.386358   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:30.404033   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:30.404047   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:30.421602   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:30.421614   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:30.446416   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:30.446434   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:30.462146   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:30.462158   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
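Every "Gathering logs for <component> [<id>]" pair above resolves to one docker logs --tail 400 <id> call issued through /bin/bash -c, so each section of the eventual log dump is capped at the newest 400 lines per container. A local stand-in sketch, with an assumed helper name:

    // Sketch of the per-container log tailing step; local exec stands in
    // for minikube's SSH runner.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gatherContainerLogs(name, id string) {
    	fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
    	out, err := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("docker logs --tail 400 %s", id)).CombinedOutput()
    	if err != nil {
    		fmt.Printf("failed to get logs for %s: %v\n", name, err)
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	gatherContainerLogs("storage-provisioner", "626478da71fb")
    }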
	I0819 11:24:32.977219   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:33.243139   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:37.979820   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:37.980061   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:38.001358   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:38.001460   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:38.021560   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:38.021636   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:38.034212   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:38.034275   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:38.045304   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:38.045374   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:38.055608   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:38.055674   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:38.066197   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:38.066268   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:38.076473   14738 logs.go:276] 0 containers: []
	W0819 11:24:38.076484   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:38.076543   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:38.086586   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:38.086604   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:38.086632   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:38.091059   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:38.091065   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:38.112614   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:38.112625   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:38.129703   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:38.129714   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:38.153887   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:38.153896   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:38.194704   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:38.194715   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:38.210363   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:38.210373   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:38.224726   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:38.224737   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:38.236391   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:38.236403   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:38.248870   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:38.248886   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:38.262402   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:38.262415   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:38.294624   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:38.294639   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:38.333073   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:38.333095   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:38.348372   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:38.348390   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:38.360608   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:38.360621   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:38.386760   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:38.386782   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
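The "kubelet" and "Docker" sections come from journald rather than from containers: one journalctl call per section, capped at the most recent 400 entries, with the Docker section covering both the docker and cri-docker units in a single invocation. A sketch (local stand-in; unit names copied from the log):

    // Sketch of the journald-backed sections gathered above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gatherJournal(section string, units ...string) {
    	args := []string{"journalctl", "-n", "400"}
    	for _, u := range units {
    		args = append(args, "-u", u)
    	}
    	out, _ := exec.Command("sudo", args...).CombinedOutput()
    	fmt.Printf("== %s ==\n%s", section, out)
    }

    func main() {
    	gatherJournal("kubelet", "kubelet")
    	gatherJournal("Docker", "docker", "cri-docker") // one call covers both units
    }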
	I0819 11:24:38.245354   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:38.245451   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:38.257171   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:38.257246   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:38.269006   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:38.269078   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:38.280120   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:38.280191   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:38.291234   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:38.291307   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:38.303342   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:38.303412   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:38.314391   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:38.314463   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:38.326208   14497 logs.go:276] 0 containers: []
	W0819 11:24:38.326220   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:38.326282   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:38.337648   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:38.337666   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:38.337672   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:38.342557   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:38.342569   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:38.381195   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:38.381208   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:38.393758   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:38.393771   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:38.406783   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:38.406793   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:38.433175   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:38.433185   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:38.444787   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:38.444798   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:38.458950   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:38.458960   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:38.471016   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:38.471025   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:38.510088   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:38.510096   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:38.531579   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:38.531593   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:38.544245   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:38.544256   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:38.556013   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:38.556027   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:38.574601   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:38.574611   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:38.586127   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:38.586136   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
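The dmesg invocation above unpacks as: -H for human-readable output, -P to suppress the pager that -H would otherwise start, -L=never to disable color codes, and --level warn,err,crit,alert,emerg to keep only warning-or-worse kernel messages, with tail capping the section at 400 lines (flag meanings per util-linux dmesg). A stand-in wrapper around the exact command string from the log:

    // Sketch running the dmesg pipeline exactly as logged; the wrapper itself
    // is a hypothetical stand-in for minikube's SSH runner.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("dmesg failed:", err)
    	}
    	fmt.Print(string(out))
    }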
	I0819 11:24:41.111404   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:40.907420   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:46.112319   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:46.112397   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:46.123861   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:46.123937   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:46.141455   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:46.141528   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:46.152926   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:46.153003   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:46.164426   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:46.164499   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:46.177395   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:46.177462   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:46.189470   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:46.189554   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:46.201366   14497 logs.go:276] 0 containers: []
	W0819 11:24:46.201379   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:46.201445   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:46.218001   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:46.218020   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:46.218026   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:46.244108   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:46.244121   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:46.256951   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:46.256963   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:46.277503   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:46.277511   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:46.289976   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:46.289987   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:24:46.304958   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:46.304969   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:46.317704   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:46.317715   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:46.332497   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:46.332509   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:46.345624   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:46.345635   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:46.357608   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:46.357621   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:46.375162   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:46.375172   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:46.380098   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:46.380105   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:46.415064   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:46.415075   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:46.429698   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:46.429708   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:46.441651   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:46.441665   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
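The "container status" command is a fallback chain written in shell: the backticks substitute crictl's full path when which finds it (and the bare name crictl otherwise, so the command still parses), and the trailing || sudo docker ps -a runs only if the crictl invocation fails, for example because crictl is not installed. A hypothetical native equivalent of the same preference order, not minikube's code:

    // Sketch: prefer crictl, fall back to docker, as the shell `||` chain does.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
    	if err != nil {
    		// crictl missing or unusable: same fallback the shell `||` provides
    		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	}
    	if err != nil {
    		fmt.Println("no container runtime CLI answered:", err)
    		return
    	}
    	fmt.Print(string(out))
    }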
	I0819 11:24:45.909853   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:45.910175   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:45.939135   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:45.939274   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:45.957785   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:45.957863   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:45.971237   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:45.971311   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:45.983393   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:45.983466   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:45.994107   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:45.994180   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:46.005791   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:46.005858   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:46.016218   14738 logs.go:276] 0 containers: []
	W0819 11:24:46.016229   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:46.016285   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:46.027537   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:46.027555   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:46.027561   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:46.051625   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:46.051636   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:46.065366   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:46.065376   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:46.076904   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:46.076914   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:46.088458   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:46.088469   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:46.112139   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:46.112148   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:46.149096   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:46.149111   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:46.164606   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:46.164617   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:46.187849   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:46.187868   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:46.209245   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:46.209263   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:46.232819   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:46.232831   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:46.237493   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:46.237500   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:46.252926   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:46.252941   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:46.276220   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:46.276232   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:46.294566   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:46.294577   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:46.335804   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:46.335814   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
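The "describe nodes" section does not use the host's kubectl: it invokes the kubectl binary minikube provisioned inside the guest under a per-Kubernetes-version path, pointed at the in-guest kubeconfig, so it works even when the host context is broken (though it still needs a responsive apiserver, which is exactly what is failing here). A sketch with the path layout copied from the log:

    // Sketch of the in-guest "describe nodes" invocation; helper name is assumed.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func describeNodes(k8sVersion string) ([]byte, error) {
    	kubectl := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubectl", k8sVersion)
    	cmd := fmt.Sprintf("sudo %s describe nodes --kubeconfig=/var/lib/minikube/kubeconfig", kubectl)
    	return exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    }

    func main() {
    	out, err := describeNodes("v1.24.1")
    	if err != nil {
    		fmt.Println("describe nodes failed:", err)
    	}
    	fmt.Print(string(out))
    }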
	I0819 11:24:48.979546   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:48.850333   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:53.981568   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:53.981655   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:53.993041   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:24:53.993112   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:54.004648   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:24:54.004735   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:54.018592   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:24:54.018667   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:54.029783   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:24:54.029850   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:54.049128   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:24:54.049195   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:54.065094   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:24:54.065166   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:54.076205   14497 logs.go:276] 0 containers: []
	W0819 11:24:54.076216   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:54.076277   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:54.097554   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:24:54.097571   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:54.097577   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:54.140306   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:54.140319   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:54.180502   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:24:54.180515   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:24:54.192537   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:24:54.192545   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:24:54.210178   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:24:54.210193   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:24:54.232096   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:24:54.232108   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:24:54.245312   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:24:54.245328   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:24:54.258578   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:24:54.258596   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:24:54.278133   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:24:54.278141   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:24:54.290649   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:54.290661   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:54.296023   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:24:54.296031   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:24:54.308342   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:24:54.308352   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:24:54.321479   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:54.321489   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:54.345790   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:24:54.345796   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:54.358300   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:24:54.358316   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
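Taken together, the two interleaved PIDs (14497 and 14738, apparently two clusters being exercised in parallel) are each running the same outer loop: probe healthz, dump the diagnostics above on failure, and retry until an overall deadline, which is why the same "Checking apiserver healthz" line recurs roughly every eight seconds per PID. A sketch of that loop; the interval and deadline below are assumptions read off the timestamps, not minikube's constants:

    // Hypothetical sketch of the outer wait loop this log traces.
    package main

    import (
    	"fmt"
    	"time"
    )

    func waitForAPIServer(check func() error, gather func(), deadline time.Duration) error {
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		if err := check(); err == nil {
    			return nil
    		}
    		gather() // the "Gathering logs for ..." blocks above
    		time.Sleep(2 * time.Second) // assumed back-off between probes
    	}
    	return fmt.Errorf("apiserver never became healthy within %s", deadline)
    }

    func main() {
    	err := waitForAPIServer(
    		func() error { return fmt.Errorf("context deadline exceeded") },
    		func() { fmt.Println("gathering diagnostics ...") },
    		10*time.Second,
    	)
    	fmt.Println(err)
    }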
	I0819 11:24:56.874289   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:53.852963   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:53.853246   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:53.884926   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:53.885050   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:53.903871   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:53.903982   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:53.918258   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:53.918326   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:53.940838   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:53.940905   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:53.951048   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:53.951107   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:53.961881   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:53.961948   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:53.971834   14738 logs.go:276] 0 containers: []
	W0819 11:24:53.971846   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:53.971900   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:53.982007   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:53.982021   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:53.982026   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:53.995058   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:53.995069   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:54.007722   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:54.007733   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:54.026463   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:54.026477   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:54.031414   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:54.031422   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:54.050789   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:54.050800   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:54.066607   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:54.066617   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:54.107212   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:54.107227   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:54.125405   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:54.125419   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:54.164467   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:54.164480   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:54.191070   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:54.191086   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:54.218583   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:54.218602   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:54.234308   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:54.234318   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:54.247892   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:54.247903   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:54.262965   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:54.262976   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:54.275833   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:54.275844   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:56.803970   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:01.876578   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:01.876643   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:01.894868   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:01.894943   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:01.906584   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:01.906653   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:01.918191   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:01.918264   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:01.930273   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:01.930338   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:01.941419   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:01.941484   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:01.952636   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:01.952705   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:01.964530   14497 logs.go:276] 0 containers: []
	W0819 11:25:01.964540   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:01.964600   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:01.980245   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:01.980262   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:01.980267   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:01.996061   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:01.996072   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:02.023222   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:02.023237   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:02.036967   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:02.036984   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:02.053018   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:02.053032   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:02.067155   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:02.067167   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:02.115784   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:02.115795   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:02.129564   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:02.129576   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:02.142865   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:02.142879   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:02.155691   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:02.155705   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:02.175122   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:02.175140   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:01.806295   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:01.806770   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:01.850344   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:25:01.850498   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:01.875629   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:25:01.875719   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:01.889757   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:25:01.889836   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:01.902675   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:25:01.902751   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:01.913946   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:25:01.914018   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:01.925717   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:25:01.925788   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:01.937148   14738 logs.go:276] 0 containers: []
	W0819 11:25:01.937161   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:01.937219   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:01.949333   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:25:01.949353   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:25:01.949358   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:25:01.965036   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:25:01.965049   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:25:01.978964   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:25:01.978977   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:25:02.001583   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:02.001592   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:02.006506   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:02.006513   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:02.043059   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:25:02.043071   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:25:02.060264   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:02.060277   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:02.084706   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:02.084722   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:02.125749   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:25:02.125765   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:25:02.138449   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:25:02.138461   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:25:02.157740   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:25:02.157750   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:25:02.183866   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:25:02.183878   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:02.198075   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:25:02.198088   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:25:02.218156   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:25:02.218169   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:25:02.238215   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:25:02.238226   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:25:02.251494   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:25:02.251506   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:25:02.214480   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:02.214498   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:02.220230   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:02.220242   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:02.269176   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:02.269187   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:02.281656   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:02.281671   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:04.793525   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:04.768254   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:09.795751   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:09.795985   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:09.818043   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:09.818135   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:09.833665   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:09.833747   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:09.847914   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:09.847987   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:09.861130   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:09.861199   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:09.883193   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:09.883246   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:09.895232   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:09.895299   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:09.906523   14497 logs.go:276] 0 containers: []
	W0819 11:25:09.906534   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:09.906593   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:09.917565   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:09.917581   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:09.917587   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:09.922358   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:09.922368   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:09.934745   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:09.934761   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:09.953178   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:09.953190   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:09.968504   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:09.968516   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:09.981193   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:09.981205   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:09.993485   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:09.993497   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:10.007243   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:10.007255   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:10.033213   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:10.033228   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:10.050073   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:10.050088   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:10.088111   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:10.088143   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:10.101725   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:10.101737   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:10.118249   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:10.118261   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:10.162783   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:10.162796   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:10.177852   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:10.177860   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:09.770634   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:09.771019   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:09.804571   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:25:09.804690   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:09.824310   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:25:09.824399   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:09.839078   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:25:09.839155   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:09.852271   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:25:09.852352   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:09.866667   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:25:09.866739   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:09.879870   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:25:09.879940   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:09.891785   14738 logs.go:276] 0 containers: []
	W0819 11:25:09.891796   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:09.891852   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:09.903208   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:25:09.903226   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:09.903233   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:09.942722   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:25:09.942734   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:25:09.960976   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:25:09.960988   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:25:09.973619   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:25:09.973631   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:25:10.000199   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:25:10.000213   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:25:10.018880   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:25:10.018890   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:25:10.031336   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:25:10.031351   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:25:10.053447   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:10.053458   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:10.078504   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:25:10.078515   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:10.092074   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:10.092085   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:10.096502   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:25:10.096512   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:25:10.112043   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:25:10.112055   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:25:10.124266   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:25:10.124277   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:25:10.136696   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:10.136712   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:10.177702   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:25:10.177716   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:25:10.192914   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:25:10.192926   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:25:12.709887   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:12.693067   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:17.711491   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:17.711688   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:17.730191   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:25:17.730270   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:17.744492   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:25:17.744569   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:17.760389   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:25:17.760463   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:17.771699   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:25:17.771771   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:17.782772   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:25:17.782844   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:17.794062   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:25:17.794130   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:17.805844   14738 logs.go:276] 0 containers: []
	W0819 11:25:17.805857   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:17.805917   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:17.817841   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:25:17.817860   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:25:17.817865   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:25:17.833531   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:25:17.833547   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:25:17.846274   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:25:17.846283   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:25:17.860160   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:25:17.860174   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:25:17.882679   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:17.882691   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:17.922333   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:17.922355   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:17.927015   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:25:17.927023   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:25:17.939258   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:25:17.939268   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:25:17.961095   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:25:17.961108   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:25:17.979988   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:25:17.979998   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:17.992314   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:25:17.992325   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:25:18.006846   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:25:18.006855   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:25:18.041421   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:25:18.041442   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:25:18.056563   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:25:18.056579   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:25:18.068430   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:18.068442   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:18.092627   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:18.092641   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:17.695329   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:17.695597   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:17.720397   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:17.720502   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:17.736794   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:17.736874   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:17.750210   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:17.750288   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:17.761828   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:17.761880   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:17.773550   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:17.773612   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:17.785063   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:17.785133   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:17.796835   14497 logs.go:276] 0 containers: []
	W0819 11:25:17.796845   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:17.796895   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:17.808139   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:17.808157   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:17.808163   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:17.845604   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:17.845622   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:17.869820   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:17.869832   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:17.884662   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:17.884671   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:17.905805   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:17.905816   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:17.917730   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:17.917740   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:17.943936   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:17.943951   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:17.983136   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:17.983146   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:18.005386   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:18.005397   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:18.019004   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:18.019015   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:18.023973   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:18.023979   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:18.037161   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:18.037174   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:18.050334   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:18.050346   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:18.067172   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:18.067183   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:18.079718   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:18.079729   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:20.596839   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:20.632146   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:25.633829   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:25.633883   14738 kubeadm.go:597] duration metric: took 4m4.006733208s to restartPrimaryControlPlane
	W0819 11:25:25.633928   14738 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 11:25:25.633952   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 11:25:26.654970   14738 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.02100825s)
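Having given up on restarting the existing control plane after 4m4s, minikube falls back to a full kubeadm reset, which tears down the control-plane static pods and deletes the generated kubeconfigs under /etc/kubernetes; that is why every subsequent ls and grep on admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf exits with status 2. The reset invocation, exactly as run over SSH:

    # Force a reset against the cri-dockerd socket, using the pinned kubeadm binary.
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force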
	I0819 11:25:26.655027   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:25:26.659936   14738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:25:26.663007   14738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:25:26.665642   14738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:25:26.665647   14738 kubeadm.go:157] found existing configuration files:
	
	I0819 11:25:26.665671   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/admin.conf
	I0819 11:25:26.668189   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:25:26.668209   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:25:26.670824   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/kubelet.conf
	I0819 11:25:26.673466   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:25:26.673489   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:25:26.676611   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/controller-manager.conf
	I0819 11:25:26.679457   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:25:26.679483   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:25:26.682092   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/scheduler.conf
	I0819 11:25:26.684927   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:25:26.684949   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
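The four grep-then-rm pairs above are minikube's stale-config sweep: any kubeconfig that does not reference the expected control-plane endpoint is removed so that kubeadm init can regenerate it. Condensed into one loop (the endpoint is the per-profile forwarded port from this run):

    ep="https://control-plane.minikube.internal:52396"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done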
	I0819 11:25:26.688219   14738 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:25:26.704506   14738 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 11:25:26.704536   14738 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:25:26.753619   14738 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:25:26.753672   14738 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:25:26.753721   14738 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 11:25:26.802472   14738 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:25:25.599531   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:25.599797   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:25.627241   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:25.627347   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:25.645567   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:25.645648   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:25.660299   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:25.660373   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:25.672432   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:25.672521   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:25.684087   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:25.684155   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:25.696049   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:25.696116   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:25.714060   14497 logs.go:276] 0 containers: []
	W0819 11:25:25.714073   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:25.714133   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:25.725758   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:25.725778   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:25.725783   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:25.764882   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:25.764893   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:25.782451   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:25.782461   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:25.787323   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:25.787333   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:25.802765   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:25.802777   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:25.819267   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:25.819279   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:25.832426   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:25.832439   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:25.844831   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:25.844844   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:25.861497   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:25.861510   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:25.874246   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:25.874258   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:25.900144   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:25.900157   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:25.940987   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:25.941004   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:25.954273   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:25.954286   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:25.967883   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:25.967896   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:25.983305   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:25.983317   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:26.806723   14738 out.go:235]   - Generating certificates and keys ...
	I0819 11:25:26.806757   14738 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:25:26.806796   14738 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:25:26.806848   14738 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 11:25:26.806885   14738 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 11:25:26.806924   14738 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 11:25:26.806952   14738 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 11:25:26.806989   14738 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 11:25:26.807030   14738 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 11:25:26.807072   14738 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 11:25:26.807117   14738 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 11:25:26.807149   14738 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 11:25:26.807189   14738 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:25:27.098083   14738 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:25:27.226234   14738 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:25:27.349101   14738 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:25:27.627697   14738 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:25:27.657721   14738 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:25:27.658092   14738 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:25:27.658533   14738 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:25:27.725528   14738 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:25:27.729786   14738 out.go:235]   - Booting up control plane ...
	I0819 11:25:27.729946   14738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:25:27.730040   14738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:25:27.730086   14738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:25:27.730166   14738 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:25:27.730316   14738 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 11:25:28.512023   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:32.231596   14738 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501672 seconds
	I0819 11:25:32.231674   14738 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:25:32.235210   14738 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:25:32.749852   14738 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:25:32.750098   14738 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-163000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:25:33.255535   14738 kubeadm.go:310] [bootstrap-token] Using token: jtd2ut.wv7l8fjgzdqcwvda
	I0819 11:25:33.258767   14738 out.go:235]   - Configuring RBAC rules ...
	I0819 11:25:33.258820   14738 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:25:33.258865   14738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:25:33.263932   14738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:25:33.265012   14738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:25:33.266084   14738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:25:33.267032   14738 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:25:33.270336   14738 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:25:33.430195   14738 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:25:33.660055   14738 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:25:33.660774   14738 kubeadm.go:310] 
	I0819 11:25:33.660868   14738 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:25:33.660880   14738 kubeadm.go:310] 
	I0819 11:25:33.661033   14738 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:25:33.661041   14738 kubeadm.go:310] 
	I0819 11:25:33.661054   14738 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:25:33.661087   14738 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:25:33.661115   14738 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:25:33.661120   14738 kubeadm.go:310] 
	I0819 11:25:33.661147   14738 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:25:33.661152   14738 kubeadm.go:310] 
	I0819 11:25:33.661188   14738 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:25:33.661195   14738 kubeadm.go:310] 
	I0819 11:25:33.661217   14738 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:25:33.661261   14738 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:25:33.661296   14738 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:25:33.661299   14738 kubeadm.go:310] 
	I0819 11:25:33.661339   14738 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:25:33.661378   14738 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:25:33.661385   14738 kubeadm.go:310] 
	I0819 11:25:33.661425   14738 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jtd2ut.wv7l8fjgzdqcwvda \
	I0819 11:25:33.661531   14738 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3f715a0124d50cfae4e4dfc474638f45f1ddd0476a0318801e6849c5425b2951 \
	I0819 11:25:33.661549   14738 kubeadm.go:310] 	--control-plane 
	I0819 11:25:33.661556   14738 kubeadm.go:310] 
	I0819 11:25:33.661623   14738 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:25:33.661631   14738 kubeadm.go:310] 
	I0819 11:25:33.661674   14738 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jtd2ut.wv7l8fjgzdqcwvda \
	I0819 11:25:33.661730   14738 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3f715a0124d50cfae4e4dfc474638f45f1ddd0476a0318801e6849c5425b2951 
	I0819 11:25:33.661791   14738 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
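The join commands printed by kubeadm embed the bootstrap token (jtd2ut.wv7l8fjgzdqcwvda) and the CA certificate hash. On the control-plane node these can be verified with standard kubeadm/openssl commands (not taken from this run; note minikube keeps its certs under /var/lib/minikube/certs, per the [certs] line above):

    # List active bootstrap tokens.
    sudo kubeadm token list
    # Recompute the discovery CA cert hash that joining nodes must present.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'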
	I0819 11:25:33.661798   14738 cni.go:84] Creating CNI manager for ""
	I0819 11:25:33.661805   14738 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:25:33.665654   14738 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 11:25:33.673645   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 11:25:33.677046   14738 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
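minikube then writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents; a minimal bridge conflist of roughly this shape (illustrative only, all field values assumed) would look like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF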
	I0819 11:25:33.682528   14738 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:25:33.682603   14738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:25:33.682636   14738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-163000 minikube.k8s.io/updated_at=2024_08_19T11_25_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=stopped-upgrade-163000 minikube.k8s.io/primary=true
	I0819 11:25:33.719300   14738 ops.go:34] apiserver oom_adj: -16
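The "apiserver oom_adj: -16" read back above means the kube-apiserver process is strongly protected from the kernel OOM killer (the legacy oom_adj scale runs from -17, never kill, to +15). The check itself is the one-liner minikube ran:

    # Read the OOM-killer adjustment for the running kube-apiserver.
    cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16: nearly exempt from OOM kills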
	I0819 11:25:33.719424   14738 kubeadm.go:1113] duration metric: took 36.892959ms to wait for elevateKubeSystemPrivileges
	I0819 11:25:33.747579   14738 kubeadm.go:394] duration metric: took 4m12.137773333s to StartCluster
	I0819 11:25:33.747600   14738 settings.go:142] acquiring lock: {Name:mk15c923e9a2cce6164c6c5cc70f47fd16c4c208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:25:33.747691   14738 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:25:33.748117   14738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/kubeconfig: {Name:mkf06e67426049c2259f6e46b5143872117d8aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:25:33.748429   14738 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:25:33.748462   14738 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:25:33.748444   14738 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 11:25:33.748665   14738 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-163000"
	I0819 11:25:33.748673   14738 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-163000"
	I0819 11:25:33.748696   14738 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-163000"
	I0819 11:25:33.748700   14738 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-163000"
	W0819 11:25:33.748708   14738 addons.go:243] addon storage-provisioner should already be in state true
	I0819 11:25:33.748731   14738 host.go:66] Checking if "stopped-upgrade-163000" exists ...
	I0819 11:25:33.750932   14738 kapi.go:59] client config for stopped-upgrade-163000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/client.key", CAFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106043d10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:25:33.751122   14738 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-163000"
	W0819 11:25:33.751132   14738 addons.go:243] addon default-storageclass should already be in state true
	I0819 11:25:33.751147   14738 host.go:66] Checking if "stopped-upgrade-163000" exists ...
	I0819 11:25:33.753592   14738 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:25:33.753614   14738 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:25:33.753629   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0819 11:25:33.756669   14738 out.go:177] * Verifying Kubernetes components...
	I0819 11:25:33.760643   14738 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:25:33.514439   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:33.514559   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:33.527203   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:33.527288   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:33.539184   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:33.539269   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:33.552302   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:33.552395   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:33.563005   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:33.563076   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:33.573686   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:33.573774   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:33.585303   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:33.585383   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:33.597451   14497 logs.go:276] 0 containers: []
	W0819 11:25:33.597464   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:33.597546   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:33.608527   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:33.608551   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:33.608558   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:33.623114   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:33.623123   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:33.634897   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:33.634910   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:33.647118   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:33.647129   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:33.670667   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:33.670677   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:33.683717   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:33.683726   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:33.703715   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:33.703730   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:33.716494   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:33.716507   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:33.735636   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:33.735651   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:33.741107   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:33.741121   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:33.780293   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:33.780305   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:33.795783   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:33.795795   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:33.807991   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:33.808004   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:33.847496   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:33.847510   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:33.861575   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:33.861589   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:36.376532   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:33.764725   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:25:33.767854   14738 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:25:33.767900   14738 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:25:33.767921   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0819 11:25:33.841595   14738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:25:33.848717   14738 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:25:33.848780   14738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:25:33.852205   14738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:25:33.853438   14738 api_server.go:72] duration metric: took 104.970958ms to wait for apiserver process to appear ...
	I0819 11:25:33.853447   14738 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:25:33.853455   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:33.896346   14738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
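Addon manifests are first copied into /etc/kubernetes/addons over SSH and then applied with the cluster's own pinned kubectl binary against the in-VM kubeconfig. The same two applies by hand, exactly as the log shows them:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml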
	I0819 11:25:34.230990   14738 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 11:25:34.231002   14738 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 11:25:41.378691   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:41.378811   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:41.395853   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:41.395950   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:41.406785   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:41.406851   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:41.417451   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:41.417520   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:41.428319   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:41.428392   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:41.441480   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:41.441547   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:41.452097   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:41.452166   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:41.462457   14497 logs.go:276] 0 containers: []
	W0819 11:25:41.462469   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:41.462523   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:41.473335   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:41.473352   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:41.473357   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:41.511338   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:41.511348   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:41.547917   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:41.547928   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:41.562159   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:41.562176   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:41.576891   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:41.576902   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:41.589284   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:41.589296   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:41.600733   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:41.600745   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:41.612650   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:41.612660   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:41.624387   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:41.624398   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:41.640784   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:41.640795   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:41.652980   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:41.652991   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:41.667363   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:41.667373   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:41.684667   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:41.684676   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:41.689382   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:41.689392   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:41.701293   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:41.701304   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:38.854000   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:38.854034   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:44.226492   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:43.854566   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:43.854590   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:49.228776   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:49.228943   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:49.239791   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:49.239869   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:49.250911   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:49.250981   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:49.261507   14497 logs.go:276] 4 containers: [b018f83efc45 31df3e5d6111 61d0ef3d0f03 c6b78cd6ea44]
	I0819 11:25:49.261578   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:49.272431   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:49.272496   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:49.282998   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:49.283059   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:49.293803   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:49.293871   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:49.305482   14497 logs.go:276] 0 containers: []
	W0819 11:25:49.305493   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:49.305556   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:49.316451   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:49.316468   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:49.316475   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:49.321175   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:49.321181   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:49.335368   14497 logs.go:123] Gathering logs for coredns [c6b78cd6ea44] ...
	I0819 11:25:49.335382   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6b78cd6ea44"
	I0819 11:25:49.347358   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:49.347372   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:49.359080   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:49.359093   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:49.376547   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:49.376558   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:49.388039   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:49.388049   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:49.400357   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:49.400369   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:49.414916   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:49.414928   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:49.428956   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:49.428968   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:49.440311   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:49.440325   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:49.463468   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:49.463476   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:49.475409   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:49.475422   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:49.512417   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:49.512426   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:49.547911   14497 logs.go:123] Gathering logs for coredns [61d0ef3d0f03] ...
	I0819 11:25:49.547924   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d0ef3d0f03"
	I0819 11:25:52.062550   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:48.855473   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:48.855501   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:57.064812   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:57.064975   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:57.080590   14497 logs.go:276] 1 containers: [590b6b5e4db3]
	I0819 11:25:57.080667   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:57.093334   14497 logs.go:276] 1 containers: [ff36620c6b25]
	I0819 11:25:57.093401   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:57.104123   14497 logs.go:276] 4 containers: [76f4f96e3d14 33316aef9534 b018f83efc45 31df3e5d6111]
	I0819 11:25:57.104196   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:57.117523   14497 logs.go:276] 1 containers: [065e037cd87a]
	I0819 11:25:57.117591   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:57.128449   14497 logs.go:276] 1 containers: [9939b5771ec5]
	I0819 11:25:57.128512   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:57.138842   14497 logs.go:276] 1 containers: [719f0363a08f]
	I0819 11:25:57.138919   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:57.150677   14497 logs.go:276] 0 containers: []
	W0819 11:25:57.150688   14497 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:57.150743   14497 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:57.161554   14497 logs.go:276] 1 containers: [2de3eda01e88]
	I0819 11:25:57.161571   14497 logs.go:123] Gathering logs for coredns [76f4f96e3d14] ...
	I0819 11:25:57.161577   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f4f96e3d14"
	I0819 11:25:57.173131   14497 logs.go:123] Gathering logs for coredns [31df3e5d6111] ...
	I0819 11:25:57.173143   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31df3e5d6111"
	I0819 11:25:57.185279   14497 logs.go:123] Gathering logs for kube-scheduler [065e037cd87a] ...
	I0819 11:25:57.185291   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 065e037cd87a"
	I0819 11:25:53.856110   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:53.856139   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:57.200468   14497 logs.go:123] Gathering logs for container status ...
	I0819 11:25:57.200494   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:57.213607   14497 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:57.213617   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:57.252672   14497 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:57.252688   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:57.292289   14497 logs.go:123] Gathering logs for etcd [ff36620c6b25] ...
	I0819 11:25:57.292304   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff36620c6b25"
	I0819 11:25:57.306526   14497 logs.go:123] Gathering logs for kube-controller-manager [719f0363a08f] ...
	I0819 11:25:57.306541   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719f0363a08f"
	I0819 11:25:57.323892   14497 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:57.323902   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:57.328345   14497 logs.go:123] Gathering logs for kube-apiserver [590b6b5e4db3] ...
	I0819 11:25:57.328353   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 590b6b5e4db3"
	I0819 11:25:57.342901   14497 logs.go:123] Gathering logs for coredns [b018f83efc45] ...
	I0819 11:25:57.342912   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b018f83efc45"
	I0819 11:25:57.357281   14497 logs.go:123] Gathering logs for coredns [33316aef9534] ...
	I0819 11:25:57.357294   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33316aef9534"
	I0819 11:25:57.369036   14497 logs.go:123] Gathering logs for kube-proxy [9939b5771ec5] ...
	I0819 11:25:57.369050   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9939b5771ec5"
	I0819 11:25:57.381144   14497 logs.go:123] Gathering logs for storage-provisioner [2de3eda01e88] ...
	I0819 11:25:57.381156   14497 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de3eda01e88"
	I0819 11:25:57.393142   14497 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:57.393154   14497 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:59.919589   14497 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:58.856529   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:58.856567   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:03.857341   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:03.857363   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 11:26:04.233095   14738 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 11:26:04.241288   14738 out.go:177] * Enabled addons: storage-provisioner
	I0819 11:26:04.921829   14497 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:04.926374   14497 out.go:201] 
	W0819 11:26:04.930419   14497 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 11:26:04.930429   14497 out.go:270] * 
	W0819 11:26:04.931100   14497 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:26:04.942321   14497 out.go:201] 
	I0819 11:26:04.247195   14738 addons.go:510] duration metric: took 30.498924s for enable addons: enabled=[storage-provisioner]
	I0819 11:26:08.858047   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:08.858095   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:13.859011   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:13.859037   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
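
The repeated `api_server.go:253` / `api_server.go:269` pairs above are minikube's apiserver health poll: it GETs https://10.0.2.15:8443/healthz with a short per-request client timeout and keeps retrying until an overall deadline, which is what finally fires as the `wait 6m0s for node: ... apiserver healthz never reported healthy` error below. A minimal Go sketch of that loop, not minikube's actual code; the 5-second per-probe timeout is an assumption inferred from the ~5s spacing of the log lines:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz mimics the check/stopped loop seen in the log: each probe
// uses a short per-request timeout, and the caller gives up after an
// overall deadline. TLS verification is skipped here because this ad-hoc
// client does not trust the apiserver's cluster CA (minikube's real
// client does verify against it).
func pollHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: matches the ~5s log spacing
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			// On the failing VM this is the repeated
			// "context deadline exceeded" line.
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // healthy
		}
	}
	return fmt.Errorf("apiserver healthz never reported healthy")
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}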
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-08-19 18:17:04 UTC, ends at Mon 2024-08-19 18:26:20 UTC. --
	Aug 19 18:25:57 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:25:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 18:26:02 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 18:26:05 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:05Z" level=error msg="ContainerStats resp: {0x40008cecc0 linux}"
	Aug 19 18:26:05 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:05Z" level=error msg="ContainerStats resp: {0x40008ceec0 linux}"
	Aug 19 18:26:06 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:06Z" level=error msg="ContainerStats resp: {0x40008a4c00 linux}"
	Aug 19 18:26:07 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 18:26:07 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:07Z" level=error msg="ContainerStats resp: {0x40008a5980 linux}"
	Aug 19 18:26:07 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:07Z" level=error msg="ContainerStats resp: {0x40008a5dc0 linux}"
	Aug 19 18:26:07 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:07Z" level=error msg="ContainerStats resp: {0x40006601c0 linux}"
	Aug 19 18:26:07 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:07Z" level=error msg="ContainerStats resp: {0x4000660580 linux}"
	Aug 19 18:26:07 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:07Z" level=error msg="ContainerStats resp: {0x40004da0c0 linux}"
	Aug 19 18:26:07 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:07Z" level=error msg="ContainerStats resp: {0x40004da600 linux}"
	Aug 19 18:26:07 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:07Z" level=error msg="ContainerStats resp: {0x40004da980 linux}"
	Aug 19 18:26:12 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:12Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 18:26:17 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:17Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 18:26:17 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:17Z" level=error msg="ContainerStats resp: {0x40008cfe40 linux}"
	Aug 19 18:26:17 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:17Z" level=error msg="ContainerStats resp: {0x40007cea80 linux}"
	Aug 19 18:26:18 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:18Z" level=error msg="ContainerStats resp: {0x40004da980 linux}"
	Aug 19 18:26:19 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:19Z" level=error msg="ContainerStats resp: {0x40004db9c0 linux}"
	Aug 19 18:26:19 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:19Z" level=error msg="ContainerStats resp: {0x40004dbcc0 linux}"
	Aug 19 18:26:19 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:19Z" level=error msg="ContainerStats resp: {0x40004dbe80 linux}"
	Aug 19 18:26:19 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:19Z" level=error msg="ContainerStats resp: {0x4000356d40 linux}"
	Aug 19 18:26:19 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:19Z" level=error msg="ContainerStats resp: {0x4000357640 linux}"
	Aug 19 18:26:19 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:19Z" level=error msg="ContainerStats resp: {0x40006607c0 linux}"
	Aug 19 18:26:19 running-upgrade-015000 cri-dockerd[3041]: time="2024-08-19T18:26:19Z" level=error msg="ContainerStats resp: {0x4000660900 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	76f4f96e3d147       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   78144e31ebba4
	33316aef95346       edaa71f2aee88       27 seconds ago      Running             coredns                   2                   1b45898e36ea2
	b018f83efc450       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   1b45898e36ea2
	31df3e5d6111f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   78144e31ebba4
	2de3eda01e88b       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   3e65b5b39490b
	9939b5771ec57       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   c6978675b38c9
	065e037cd87a3       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   d10d8ec3a4e1c
	590b6b5e4db3c       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   81f4b5bf21d92
	ff36620c6b250       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   d86d2414f674e
	719f0363a08f4       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   3d7131e93db89
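
The "container status" gathering step earlier in the log shells out through SSH with a fallback: `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`, i.e. prefer crictl and fall back to docker, which is where the table above comes from. A hedged local sketch of that fallback using os/exec (run directly rather than over SSH as minikube does):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker, mirroring
// the shell one-liner in the log. Both tools and sudo access are assumed
// to be available on the host where this runs.
func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("neither crictl nor docker worked: %w", err)
	}
	return string(out), nil
}

func main() {
	if out, err := containerStatus(); err == nil {
		fmt.Print(out)
	} else {
		fmt.Println(err)
	}
}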
	
	
	==> coredns [31df3e5d6111] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1447933176062385844.4032588388747458295. HINFO: read udp 10.244.0.3:50229->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1447933176062385844.4032588388747458295. HINFO: read udp 10.244.0.3:58102->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1447933176062385844.4032588388747458295. HINFO: read udp 10.244.0.3:49840->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1447933176062385844.4032588388747458295. HINFO: read udp 10.244.0.3:54808->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1447933176062385844.4032588388747458295. HINFO: read udp 10.244.0.3:49476->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1447933176062385844.4032588388747458295. HINFO: read udp 10.244.0.3:55625->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1447933176062385844.4032588388747458295. HINFO: read udp 10.244.0.3:41657->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1447933176062385844.4032588388747458295. HINFO: read udp 10.244.0.3:46148->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1447933176062385844.4032588388747458295. HINFO: read udp 10.244.0.3:37029->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1447933176062385844.4032588388747458295. HINFO: read udp 10.244.0.3:48204->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [33316aef9534] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2138406510328694539.1697715845834200660. HINFO: read udp 10.244.0.2:40624->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2138406510328694539.1697715845834200660. HINFO: read udp 10.244.0.2:48112->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2138406510328694539.1697715845834200660. HINFO: read udp 10.244.0.2:33649->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2138406510328694539.1697715845834200660. HINFO: read udp 10.244.0.2:56180->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2138406510328694539.1697715845834200660. HINFO: read udp 10.244.0.2:60243->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2138406510328694539.1697715845834200660. HINFO: read udp 10.244.0.2:48623->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2138406510328694539.1697715845834200660. HINFO: read udp 10.244.0.2:50657->10.0.2.3:53: i/o timeout
	
	
	==> coredns [76f4f96e3d14] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 784177405870849500.5191159786480892123. HINFO: read udp 10.244.0.3:34388->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 784177405870849500.5191159786480892123. HINFO: read udp 10.244.0.3:40173->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 784177405870849500.5191159786480892123. HINFO: read udp 10.244.0.3:46352->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 784177405870849500.5191159786480892123. HINFO: read udp 10.244.0.3:45753->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 784177405870849500.5191159786480892123. HINFO: read udp 10.244.0.3:51990->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 784177405870849500.5191159786480892123. HINFO: read udp 10.244.0.3:51134->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 784177405870849500.5191159786480892123. HINFO: read udp 10.244.0.3:36155->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b018f83efc45] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3872478347411776736.6783990971657897099. HINFO: read udp 10.244.0.2:35629->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3872478347411776736.6783990971657897099. HINFO: read udp 10.244.0.2:55585->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3872478347411776736.6783990971657897099. HINFO: read udp 10.244.0.2:48949->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3872478347411776736.6783990971657897099. HINFO: read udp 10.244.0.2:59140->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3872478347411776736.6783990971657897099. HINFO: read udp 10.244.0.2:44191->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3872478347411776736.6783990971657897099. HINFO: read udp 10.244.0.2:54329->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3872478347411776736.6783990971657897099. HINFO: read udp 10.244.0.2:35614->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3872478347411776736.6783990971657897099. HINFO: read udp 10.244.0.2:60762->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3872478347411776736.6783990971657897099. HINFO: read udp 10.244.0.2:55835->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3872478347411776736.6783990971657897099. HINFO: read udp 10.244.0.2:38729->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
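
The repeated `read udp ...->10.0.2.3:53: i/o timeout` lines in all four coredns blocks are CoreDNS's startup HINFO self-probe failing against the upstream resolver; 10.0.2.3 is the DNS address QEMU's user-mode (slirp) networking exposes inside the guest. A minimal sketch (not CoreDNS's code) that reproduces the same kind of probe from inside the guest, assuming the slirp resolver address:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeDNS sends a hand-built DNS query over UDP and waits briefly for a
// reply. On a guest whose slirp DNS is unreachable, the read fails with
// the same "i/o timeout" seen in the coredns logs above.
func probeDNS(server string) error {
	conn, err := net.DialTimeout("udp", server, 2*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()
	// Minimal DNS query: 12-byte header (ID 0x1234, RD set, 1 question)
	// followed by the question "example.com. IN A".
	query := []byte{
		0x12, 0x34, 0x01, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
		7, 'e', 'x', 'a', 'm', 'p', 'l', 'e', 3, 'c', 'o', 'm', 0,
		0x00, 0x01, 0x00, 0x01,
	}
	conn.SetDeadline(time.Now().Add(2 * time.Second))
	if _, err := conn.Write(query); err != nil {
		return err
	}
	buf := make([]byte, 512)
	if _, err := conn.Read(buf); err != nil {
		return err // "read udp ...: i/o timeout" on the failing VM
	}
	return nil
}

func main() {
	if err := probeDNS("10.0.2.3:53"); err != nil {
		fmt.Println("upstream DNS probe failed:", err)
	} else {
		fmt.Println("upstream DNS reachable")
	}
}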
	
	
	==> describe nodes <==
	Name:               running-upgrade-015000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-015000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=running-upgrade-015000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_22_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-015000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:26:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:22:03 +0000   Mon, 19 Aug 2024 18:21:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:22:03 +0000   Mon, 19 Aug 2024 18:21:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:22:03 +0000   Mon, 19 Aug 2024 18:21:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:22:03 +0000   Mon, 19 Aug 2024 18:22:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-015000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 6cff3f796d7d4746a9212de7aa23029f
	  System UUID:                6cff3f796d7d4746a9212de7aa23029f
	  Boot ID:                    15fa51c6-2fc1-4f5c-b92f-365dc0bd1da5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-67xzq                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-mc967                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-015000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-015000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-015000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-bxwl2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-015000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-015000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-015000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-015000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-015000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-015000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-015000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-015000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-015000 event: Registered Node running-upgrade-015000 in Controller
	
	
	==> dmesg <==
	[  +1.664294] systemd-fstab-generator[833]: Ignoring "noauto" for root device
	[  +0.076810] systemd-fstab-generator[844]: Ignoring "noauto" for root device
	[  +0.076396] systemd-fstab-generator[855]: Ignoring "noauto" for root device
	[  +1.132507] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.095316] systemd-fstab-generator[1007]: Ignoring "noauto" for root device
	[  +0.075220] systemd-fstab-generator[1018]: Ignoring "noauto" for root device
	[  +2.931914] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[  +8.650072] systemd-fstab-generator[1842]: Ignoring "noauto" for root device
	[  +2.643349] systemd-fstab-generator[2202]: Ignoring "noauto" for root device
	[  +0.189192] systemd-fstab-generator[2240]: Ignoring "noauto" for root device
	[  +0.088730] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
	[  +0.096291] systemd-fstab-generator[2264]: Ignoring "noauto" for root device
	[ +12.533157] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.225316] systemd-fstab-generator[2996]: Ignoring "noauto" for root device
	[  +0.081679] systemd-fstab-generator[3009]: Ignoring "noauto" for root device
	[  +0.081471] systemd-fstab-generator[3020]: Ignoring "noauto" for root device
	[  +0.093747] systemd-fstab-generator[3034]: Ignoring "noauto" for root device
	[  +2.313991] systemd-fstab-generator[3186]: Ignoring "noauto" for root device
	[  +3.082906] systemd-fstab-generator[3578]: Ignoring "noauto" for root device
	[  +1.317338] systemd-fstab-generator[3838]: Ignoring "noauto" for root device
	[Aug19 18:18] kauditd_printk_skb: 68 callbacks suppressed
	[Aug19 18:21] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.400905] systemd-fstab-generator[11893]: Ignoring "noauto" for root device
	[Aug19 18:22] systemd-fstab-generator[12485]: Ignoring "noauto" for root device
	[  +0.464229] systemd-fstab-generator[12617]: Ignoring "noauto" for root device
	
	
	==> etcd [ff36620c6b25] <==
	{"level":"info","ts":"2024-08-19T18:21:59.335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-19T18:21:59.341Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-19T18:21:59.343Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T18:21:59.343Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-19T18:21:59.343Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-19T18:21:59.343Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T18:21:59.343Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T18:21:59.779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T18:21:59.779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T18:21:59.780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-19T18:21:59.780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T18:21:59.780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-19T18:21:59.780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-19T18:21:59.780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-19T18:21:59.780Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:21:59.781Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:21:59.781Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:21:59.781Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:21:59.781Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-015000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T18:21:59.781Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T18:21:59.781Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-19T18:21:59.781Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T18:21:59.782Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T18:21:59.782Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T18:21:59.782Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:26:21 up 9 min,  0 users,  load average: 0.23, 0.27, 0.18
	Linux running-upgrade-015000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [590b6b5e4db3] <==
	I0819 18:22:01.102184       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0819 18:22:01.102267       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0819 18:22:01.112236       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 18:22:01.112256       1 cache.go:39] Caches are synced for autoregister controller
	I0819 18:22:01.112437       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0819 18:22:01.149096       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0819 18:22:01.156135       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0819 18:22:01.852756       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0819 18:22:02.015581       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 18:22:02.017616       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 18:22:02.018174       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 18:22:02.132645       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 18:22:02.146085       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 18:22:02.186728       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0819 18:22:02.188772       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0819 18:22:02.189127       1 controller.go:611] quota admission added evaluator for: endpoints
	I0819 18:22:02.190286       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 18:22:03.159721       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0819 18:22:03.793091       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0819 18:22:03.796510       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0819 18:22:03.803211       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0819 18:22:03.841434       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 18:22:16.640420       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0819 18:22:16.739938       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0819 18:22:17.196835       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [719f0363a08f] <==
	I0819 18:22:16.037664       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0819 18:22:16.037733       1 shared_informer.go:262] Caches are synced for GC
	I0819 18:22:16.038884       1 shared_informer.go:262] Caches are synced for ephemeral
	I0819 18:22:16.038908       1 shared_informer.go:262] Caches are synced for taint
	I0819 18:22:16.038966       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0819 18:22:16.039020       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-015000. Assuming now as a timestamp.
	I0819 18:22:16.039044       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0819 18:22:16.039066       1 event.go:294] "Event occurred" object="running-upgrade-015000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-015000 event: Registered Node running-upgrade-015000 in Controller"
	I0819 18:22:16.039098       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0819 18:22:16.039673       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0819 18:22:16.193021       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 18:22:16.195140       1 shared_informer.go:262] Caches are synced for endpoint
	I0819 18:22:16.214532       1 shared_informer.go:262] Caches are synced for stateful set
	I0819 18:22:16.238471       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0819 18:22:16.238535       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0819 18:22:16.240760       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 18:22:16.281660       1 shared_informer.go:262] Caches are synced for disruption
	I0819 18:22:16.281678       1 disruption.go:371] Sending events to api server.
	I0819 18:22:16.644879       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bxwl2"
	I0819 18:22:16.657132       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 18:22:16.688162       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 18:22:16.688197       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0819 18:22:16.741170       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0819 18:22:17.043526       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-mc967"
	I0819 18:22:17.047288       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-67xzq"
	
	
	==> kube-proxy [9939b5771ec5] <==
	I0819 18:22:17.170828       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0819 18:22:17.170879       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0819 18:22:17.170892       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0819 18:22:17.193800       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0819 18:22:17.193814       1 server_others.go:206] "Using iptables Proxier"
	I0819 18:22:17.193827       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0819 18:22:17.193964       1 server.go:661] "Version info" version="v1.24.1"
	I0819 18:22:17.193972       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:22:17.194344       1 config.go:317] "Starting service config controller"
	I0819 18:22:17.194355       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0819 18:22:17.194363       1 config.go:226] "Starting endpoint slice config controller"
	I0819 18:22:17.194365       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0819 18:22:17.194648       1 config.go:444] "Starting node config controller"
	I0819 18:22:17.194651       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0819 18:22:17.294850       1 shared_informer.go:262] Caches are synced for node config
	I0819 18:22:17.294874       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0819 18:22:17.294888       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [065e037cd87a] <==
	W0819 18:22:01.087069       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:22:01.087073       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0819 18:22:01.087085       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:22:01.087088       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0819 18:22:01.087110       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 18:22:01.087117       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0819 18:22:01.087194       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:22:01.087201       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 18:22:01.087238       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:22:01.087241       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0819 18:22:01.087254       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:22:01.087257       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0819 18:22:01.087268       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:22:01.087271       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0819 18:22:01.087281       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:22:01.087284       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0819 18:22:01.087295       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 18:22:01.087299       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0819 18:22:01.087362       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 18:22:01.087373       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0819 18:22:01.087390       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 18:22:01.087395       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0819 18:22:01.987932       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:22:01.987962       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0819 18:22:02.585439       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-08-19 18:17:04 UTC, ends at Mon 2024-08-19 18:26:21 UTC. --
	Aug 19 18:22:05 running-upgrade-015000 kubelet[12491]: E0819 18:22:05.624182   12491 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-015000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-015000"
	Aug 19 18:22:05 running-upgrade-015000 kubelet[12491]: E0819 18:22:05.824652   12491 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-015000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-015000"
	Aug 19 18:22:06 running-upgrade-015000 kubelet[12491]: I0819 18:22:06.022227   12491 request.go:601] Waited for 1.121334539s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 19 18:22:06 running-upgrade-015000 kubelet[12491]: E0819 18:22:06.028129   12491 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-015000\" already exists" pod="kube-system/etcd-running-upgrade-015000"
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: I0819 18:22:16.044456   12491 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: I0819 18:22:16.065203   12491 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: I0819 18:22:16.065327   12491 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e6cedde5-e0b0-485c-8ba6-c66a1ecdeedb-tmp\") pod \"storage-provisioner\" (UID: \"e6cedde5-e0b0-485c-8ba6-c66a1ecdeedb\") " pod="kube-system/storage-provisioner"
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: I0819 18:22:16.065344   12491 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f759m\" (UniqueName: \"kubernetes.io/projected/e6cedde5-e0b0-485c-8ba6-c66a1ecdeedb-kube-api-access-f759m\") pod \"storage-provisioner\" (UID: \"e6cedde5-e0b0-485c-8ba6-c66a1ecdeedb\") " pod="kube-system/storage-provisioner"
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: I0819 18:22:16.065564   12491 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: E0819 18:22:16.169300   12491 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: E0819 18:22:16.169352   12491 projected.go:192] Error preparing data for projected volume kube-api-access-f759m for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: E0819 18:22:16.169392   12491 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/e6cedde5-e0b0-485c-8ba6-c66a1ecdeedb-kube-api-access-f759m podName:e6cedde5-e0b0-485c-8ba6-c66a1ecdeedb nodeName:}" failed. No retries permitted until 2024-08-19 18:22:16.669380183 +0000 UTC m=+12.894226091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f759m" (UniqueName: "kubernetes.io/projected/e6cedde5-e0b0-485c-8ba6-c66a1ecdeedb-kube-api-access-f759m") pod "storage-provisioner" (UID: "e6cedde5-e0b0-485c-8ba6-c66a1ecdeedb") : configmap "kube-root-ca.crt" not found
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: I0819 18:22:16.648632   12491 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: I0819 18:22:16.769079   12491 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d38d1111-ac8c-45de-b1bf-eb6e6598d86b-kube-proxy\") pod \"kube-proxy-bxwl2\" (UID: \"d38d1111-ac8c-45de-b1bf-eb6e6598d86b\") " pod="kube-system/kube-proxy-bxwl2"
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: I0819 18:22:16.769168   12491 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d38d1111-ac8c-45de-b1bf-eb6e6598d86b-xtables-lock\") pod \"kube-proxy-bxwl2\" (UID: \"d38d1111-ac8c-45de-b1bf-eb6e6598d86b\") " pod="kube-system/kube-proxy-bxwl2"
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: I0819 18:22:16.769195   12491 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqbrl\" (UniqueName: \"kubernetes.io/projected/d38d1111-ac8c-45de-b1bf-eb6e6598d86b-kube-api-access-hqbrl\") pod \"kube-proxy-bxwl2\" (UID: \"d38d1111-ac8c-45de-b1bf-eb6e6598d86b\") " pod="kube-system/kube-proxy-bxwl2"
	Aug 19 18:22:16 running-upgrade-015000 kubelet[12491]: I0819 18:22:16.769218   12491 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d38d1111-ac8c-45de-b1bf-eb6e6598d86b-lib-modules\") pod \"kube-proxy-bxwl2\" (UID: \"d38d1111-ac8c-45de-b1bf-eb6e6598d86b\") " pod="kube-system/kube-proxy-bxwl2"
	Aug 19 18:22:17 running-upgrade-015000 kubelet[12491]: I0819 18:22:17.047473   12491 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 18:22:17 running-upgrade-015000 kubelet[12491]: I0819 18:22:17.056617   12491 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 18:22:17 running-upgrade-015000 kubelet[12491]: I0819 18:22:17.073013   12491 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5128d444-23d2-4081-be22-ac31fbe47300-config-volume\") pod \"coredns-6d4b75cb6d-mc967\" (UID: \"5128d444-23d2-4081-be22-ac31fbe47300\") " pod="kube-system/coredns-6d4b75cb6d-mc967"
	Aug 19 18:22:17 running-upgrade-015000 kubelet[12491]: I0819 18:22:17.073044   12491 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8f12add-f297-4391-a86d-52cc68db062f-config-volume\") pod \"coredns-6d4b75cb6d-67xzq\" (UID: \"b8f12add-f297-4391-a86d-52cc68db062f\") " pod="kube-system/coredns-6d4b75cb6d-67xzq"
	Aug 19 18:22:17 running-upgrade-015000 kubelet[12491]: I0819 18:22:17.073056   12491 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg2s8\" (UniqueName: \"kubernetes.io/projected/b8f12add-f297-4391-a86d-52cc68db062f-kube-api-access-pg2s8\") pod \"coredns-6d4b75cb6d-67xzq\" (UID: \"b8f12add-f297-4391-a86d-52cc68db062f\") " pod="kube-system/coredns-6d4b75cb6d-67xzq"
	Aug 19 18:22:17 running-upgrade-015000 kubelet[12491]: I0819 18:22:17.073066   12491 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnndg\" (UniqueName: \"kubernetes.io/projected/5128d444-23d2-4081-be22-ac31fbe47300-kube-api-access-xnndg\") pod \"coredns-6d4b75cb6d-mc967\" (UID: \"5128d444-23d2-4081-be22-ac31fbe47300\") " pod="kube-system/coredns-6d4b75cb6d-mc967"
	Aug 19 18:25:55 running-upgrade-015000 kubelet[12491]: I0819 18:25:55.346381   12491 scope.go:110] "RemoveContainer" containerID="c6b78cd6ea4412deda417e739f4dccd8e21a91c65fcde88a00db4aba8e21c188"
	Aug 19 18:25:55 running-upgrade-015000 kubelet[12491]: I0819 18:25:55.359263   12491 scope.go:110] "RemoveContainer" containerID="61d0ef3d0f03d55a255d071fa83e7e8b91914b7874b109edfc61b4e7b3ca3e25"
	
	
	==> storage-provisioner [2de3eda01e88] <==
	I0819 18:22:17.216232       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:22:17.220715       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:22:17.220736       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:22:17.224051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:22:17.224302       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-015000_a0fbc6d3-a906-41eb-a9a9-8d7e21b593db!
	I0819 18:22:17.224254       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45a404b9-7788-4050-ac25-de32a4ba6d56", APIVersion:"v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-015000_a0fbc6d3-a906-41eb-a9a9-8d7e21b593db became leader
	I0819 18:22:17.325086       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-015000_a0fbc6d3-a906-41eb-a9a9-8d7e21b593db!
	

-- /stdout --
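The storage-provisioner log above shows the normal startup sequence: initialize, acquire the kube-system/k8s.io-minikube-hostpath leader lease, then start the provisioner controller. On a cluster whose apiserver is still up, the current lease holder can be read back from the Endpoints object named in the LeaderElection event; a minimal sketch of that check (kubectl pointed at the profile's context is assumed, and the leader identity sits in the object's annotations):

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

Here the check is moot: the post-mortem below finds the apiserver already stopped.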
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-015000 -n running-upgrade-015000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-015000 -n running-upgrade-015000: exit status 2 (15.6778775s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-015000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-015000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-015000
--- FAIL: TestRunningBinaryUpgrade (598.05s)
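The post-mortem above probes one field of minikube's status at a time through a Go template, and the harness deliberately tolerates a non-zero exit when stdout explains the state ("may be ok"). A sketch of the same probes, with <profile> as a placeholder name:

	out/minikube-darwin-arm64 status --format={{.Host}} -p <profile>
	out/minikube-darwin-arm64 status --format={{.APIServer}} -p <profile>

Exit status 2 or 7 together with "Stopped" on stdout, as seen here, reports a down component rather than a failure of the status command itself.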

TestKubernetesUpgrade (18.55s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-611000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-611000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.994045167s)

-- stdout --
	* [kubernetes-upgrade-611000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-611000" primary control-plane node in "kubernetes-upgrade-611000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-611000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:19:39.435391   14632 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:19:39.435519   14632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:19:39.435522   14632 out.go:358] Setting ErrFile to fd 2...
	I0819 11:19:39.435528   14632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:19:39.435654   14632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:19:39.436775   14632 out.go:352] Setting JSON to false
	I0819 11:19:39.453400   14632 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6546,"bootTime":1724085033,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:19:39.453466   14632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:19:39.458750   14632 out.go:177] * [kubernetes-upgrade-611000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:19:39.465647   14632 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:19:39.465717   14632 notify.go:220] Checking for updates...
	I0819 11:19:39.471573   14632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:19:39.474605   14632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:19:39.477555   14632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:19:39.480531   14632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:19:39.483611   14632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:19:39.486868   14632 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:19:39.486933   14632 config.go:182] Loaded profile config "running-upgrade-015000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:19:39.486984   14632 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:19:39.491587   14632 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:19:39.498463   14632 start.go:297] selected driver: qemu2
	I0819 11:19:39.498471   14632 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:19:39.498477   14632 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:19:39.500839   14632 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:19:39.503526   14632 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:19:39.506632   14632 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:19:39.506648   14632 cni.go:84] Creating CNI manager for ""
	I0819 11:19:39.506654   14632 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 11:19:39.506696   14632 start.go:340] cluster config:
	{Name:kubernetes-upgrade-611000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:19:39.510354   14632 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:19:39.517545   14632 out.go:177] * Starting "kubernetes-upgrade-611000" primary control-plane node in "kubernetes-upgrade-611000" cluster
	I0819 11:19:39.521554   14632 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:19:39.521572   14632 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:19:39.521581   14632 cache.go:56] Caching tarball of preloaded images
	I0819 11:19:39.521675   14632 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:19:39.521682   14632 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 11:19:39.521760   14632 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/kubernetes-upgrade-611000/config.json ...
	I0819 11:19:39.521771   14632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/kubernetes-upgrade-611000/config.json: {Name:mk8609806e9c25608d8a7c1f2fc5b253484cde93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:19:39.522093   14632 start.go:360] acquireMachinesLock for kubernetes-upgrade-611000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:19:39.522125   14632 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "kubernetes-upgrade-611000"
	I0819 11:19:39.522137   14632 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:19:39.522163   14632 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:19:39.529530   14632 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:19:39.547407   14632 start.go:159] libmachine.API.Create for "kubernetes-upgrade-611000" (driver="qemu2")
	I0819 11:19:39.547442   14632 client.go:168] LocalClient.Create starting
	I0819 11:19:39.547504   14632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:19:39.547534   14632 main.go:141] libmachine: Decoding PEM data...
	I0819 11:19:39.547546   14632 main.go:141] libmachine: Parsing certificate...
	I0819 11:19:39.547590   14632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:19:39.547613   14632 main.go:141] libmachine: Decoding PEM data...
	I0819 11:19:39.547621   14632 main.go:141] libmachine: Parsing certificate...
	I0819 11:19:39.547981   14632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:19:39.741355   14632 main.go:141] libmachine: Creating SSH key...
	I0819 11:19:39.947088   14632 main.go:141] libmachine: Creating Disk image...
	I0819 11:19:39.947096   14632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:19:39.947355   14632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2
	I0819 11:19:39.957260   14632 main.go:141] libmachine: STDOUT: 
	I0819 11:19:39.957279   14632 main.go:141] libmachine: STDERR: 
	I0819 11:19:39.957325   14632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2 +20000M
	I0819 11:19:39.965272   14632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:19:39.965290   14632 main.go:141] libmachine: STDERR: 
	I0819 11:19:39.965304   14632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2
	I0819 11:19:39.965309   14632 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:19:39.965326   14632 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:19:39.965347   14632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:2b:0b:bd:67:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2
	I0819 11:19:39.966895   14632 main.go:141] libmachine: STDOUT: 
	I0819 11:19:39.966910   14632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:19:39.966927   14632 client.go:171] duration metric: took 419.481333ms to LocalClient.Create
	I0819 11:19:41.968936   14632 start.go:128] duration metric: took 2.44677925s to createHost
	I0819 11:19:41.968952   14632 start.go:83] releasing machines lock for "kubernetes-upgrade-611000", held for 2.446835334s
	W0819 11:19:41.968975   14632 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:19:41.979391   14632 out.go:177] * Deleting "kubernetes-upgrade-611000" in qemu2 ...
	W0819 11:19:41.990203   14632 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:19:41.990216   14632 start.go:729] Will try again in 5 seconds ...
	I0819 11:19:46.992514   14632 start.go:360] acquireMachinesLock for kubernetes-upgrade-611000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:19:46.993064   14632 start.go:364] duration metric: took 430.041µs to acquireMachinesLock for "kubernetes-upgrade-611000"
	I0819 11:19:46.993149   14632 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:19:46.993435   14632 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:19:46.999054   14632 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:19:47.047551   14632 start.go:159] libmachine.API.Create for "kubernetes-upgrade-611000" (driver="qemu2")
	I0819 11:19:47.047598   14632 client.go:168] LocalClient.Create starting
	I0819 11:19:47.047718   14632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:19:47.047791   14632 main.go:141] libmachine: Decoding PEM data...
	I0819 11:19:47.047811   14632 main.go:141] libmachine: Parsing certificate...
	I0819 11:19:47.047873   14632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:19:47.047920   14632 main.go:141] libmachine: Decoding PEM data...
	I0819 11:19:47.047934   14632 main.go:141] libmachine: Parsing certificate...
	I0819 11:19:47.048571   14632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:19:47.210039   14632 main.go:141] libmachine: Creating SSH key...
	I0819 11:19:47.338208   14632 main.go:141] libmachine: Creating Disk image...
	I0819 11:19:47.338216   14632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:19:47.338441   14632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2
	I0819 11:19:47.348040   14632 main.go:141] libmachine: STDOUT: 
	I0819 11:19:47.348057   14632 main.go:141] libmachine: STDERR: 
	I0819 11:19:47.348100   14632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2 +20000M
	I0819 11:19:47.356078   14632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:19:47.356096   14632 main.go:141] libmachine: STDERR: 
	I0819 11:19:47.356107   14632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2
	I0819 11:19:47.356120   14632 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:19:47.356130   14632 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:19:47.356170   14632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b8:fa:53:cd:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2
	I0819 11:19:47.357850   14632 main.go:141] libmachine: STDOUT: 
	I0819 11:19:47.357865   14632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:19:47.357878   14632 client.go:171] duration metric: took 310.276917ms to LocalClient.Create
	I0819 11:19:49.360069   14632 start.go:128] duration metric: took 2.366606584s to createHost
	I0819 11:19:49.360139   14632 start.go:83] releasing machines lock for "kubernetes-upgrade-611000", held for 2.367063s
	W0819 11:19:49.360463   14632 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-611000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-611000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:19:49.373211   14632 out.go:201] 
	W0819 11:19:49.378143   14632 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:19:49.378170   14632 out.go:270] * 
	* 
	W0819 11:19:49.385903   14632 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:19:49.390223   14632 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-611000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-611000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-611000: (3.156081208s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-611000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-611000 status --format={{.Host}}: exit status 7 (67.691083ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-611000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-611000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.181981834s)

-- stdout --
	* [kubernetes-upgrade-611000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-611000" primary control-plane node in "kubernetes-upgrade-611000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-611000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-611000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:19:52.661809   14672 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:19:52.661990   14672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:19:52.661994   14672 out.go:358] Setting ErrFile to fd 2...
	I0819 11:19:52.661996   14672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:19:52.662153   14672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:19:52.663716   14672 out.go:352] Setting JSON to false
	I0819 11:19:52.683447   14672 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6559,"bootTime":1724085033,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:19:52.683554   14672 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:19:52.688291   14672 out.go:177] * [kubernetes-upgrade-611000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:19:52.695244   14672 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:19:52.695280   14672 notify.go:220] Checking for updates...
	I0819 11:19:52.702256   14672 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:19:52.705180   14672 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:19:52.708250   14672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:19:52.711255   14672 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:19:52.714153   14672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:19:52.717462   14672 config.go:182] Loaded profile config "kubernetes-upgrade-611000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 11:19:52.717720   14672 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:19:52.722225   14672 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:19:52.729253   14672 start.go:297] selected driver: qemu2
	I0819 11:19:52.729259   14672 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:19:52.729307   14672 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:19:52.731623   14672 cni.go:84] Creating CNI manager for ""
	I0819 11:19:52.731641   14672 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:19:52.731665   14672 start.go:340] cluster config:
	{Name:kubernetes-upgrade-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:19:52.735028   14672 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:19:52.742182   14672 out.go:177] * Starting "kubernetes-upgrade-611000" primary control-plane node in "kubernetes-upgrade-611000" cluster
	I0819 11:19:52.746245   14672 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:19:52.746258   14672 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:19:52.746265   14672 cache.go:56] Caching tarball of preloaded images
	I0819 11:19:52.746318   14672 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:19:52.746327   14672 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:19:52.746379   14672 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/kubernetes-upgrade-611000/config.json ...
	I0819 11:19:52.746703   14672 start.go:360] acquireMachinesLock for kubernetes-upgrade-611000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:19:52.746733   14672 start.go:364] duration metric: took 22.334µs to acquireMachinesLock for "kubernetes-upgrade-611000"
	I0819 11:19:52.746742   14672 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:19:52.746747   14672 fix.go:54] fixHost starting: 
	I0819 11:19:52.746858   14672 fix.go:112] recreateIfNeeded on kubernetes-upgrade-611000: state=Stopped err=<nil>
	W0819 11:19:52.746865   14672 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:19:52.755194   14672 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-611000" ...
	I0819 11:19:52.759252   14672 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:19:52.759282   14672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b8:fa:53:cd:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2
	I0819 11:19:52.761100   14672 main.go:141] libmachine: STDOUT: 
	I0819 11:19:52.761119   14672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:19:52.761144   14672 fix.go:56] duration metric: took 14.398333ms for fixHost
	I0819 11:19:52.761148   14672 start.go:83] releasing machines lock for "kubernetes-upgrade-611000", held for 14.411666ms
	W0819 11:19:52.761155   14672 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:19:52.761183   14672 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:19:52.761187   14672 start.go:729] Will try again in 5 seconds ...
	I0819 11:19:57.763298   14672 start.go:360] acquireMachinesLock for kubernetes-upgrade-611000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:19:57.763470   14672 start.go:364] duration metric: took 147.042µs to acquireMachinesLock for "kubernetes-upgrade-611000"
	I0819 11:19:57.763498   14672 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:19:57.763507   14672 fix.go:54] fixHost starting: 
	I0819 11:19:57.763833   14672 fix.go:112] recreateIfNeeded on kubernetes-upgrade-611000: state=Stopped err=<nil>
	W0819 11:19:57.763851   14672 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:19:57.768210   14672 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-611000" ...
	I0819 11:19:57.775128   14672 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:19:57.775240   14672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b8:fa:53:cd:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubernetes-upgrade-611000/disk.qcow2
	I0819 11:19:57.778841   14672 main.go:141] libmachine: STDOUT: 
	I0819 11:19:57.778872   14672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:19:57.778905   14672 fix.go:56] duration metric: took 15.398041ms for fixHost
	I0819 11:19:57.778912   14672 start.go:83] releasing machines lock for "kubernetes-upgrade-611000", held for 15.4315ms
	W0819 11:19:57.778984   14672 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-611000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-611000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:19:57.786060   14672 out.go:201] 
	W0819 11:19:57.790141   14672 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:19:57.790159   14672 out.go:270] * 
	* 
	W0819 11:19:57.790776   14672 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:19:57.800926   14672 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-611000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-611000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-611000 version --output=json: exit status 1 (31.47425ms)

** stderr ** 
	error: context "kubernetes-upgrade-611000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-19 11:19:57.842374 -0700 PDT m=+867.101187168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-611000 -n kubernetes-upgrade-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-611000 -n kubernetes-upgrade-611000: exit status 7 (33.7945ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-611000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-611000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-611000
--- FAIL: TestKubernetesUpgrade (18.55s)
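Both upgrade tests above fail before Kubernetes is ever exercised: each qemu2 start dies when socket_vmnet_client cannot reach the network daemon at /var/run/socket_vmnet. A quick sanity check of that precondition, assuming the Homebrew-managed install implied by the /opt/socket_vmnet paths in the QEMU command lines (the brew service name is an assumption):

	# is the unix socket present?
	ls -l /var/run/socket_vmnet
	# is the daemon loaded?
	sudo launchctl list | grep -i socket_vmnet
	# restart it on a Homebrew install
	sudo brew services restart socket_vmnet

Until the daemon is listening again, every profile on this worker that uses the socket_vmnet network will keep exiting with the same GUEST_PROVISION error.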

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.09s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19468
- KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current649365181/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.09s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.09s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19468
- KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current250973451/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.09s)
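Both TestHyperkitDriverSkipUpgrade subtests fail for the same environmental reason: the hyperkit driver exists only for Intel Macs, and this agent is Apple silicon, so minikube exits 56 with DRV_UNSUPPORTED_OS before any upgrade logic runs. A one-line probe of the condition being reported (illustrative only; the expected values come from the log above):

	# this agent prints "Darwin arm64"; hyperkit requires darwin/amd64
	uname -sm

On darwin/arm64 workers these subtests are expected failures until they are gated on host architecture.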

TestStoppedBinaryUpgrade/Upgrade (576.01s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1631537926 start -p stopped-upgrade-163000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1631537926 start -p stopped-upgrade-163000 --memory=2200 --vm-driver=qemu2 : (51.491126875s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1631537926 -p stopped-upgrade-163000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1631537926 -p stopped-upgrade-163000 stop: (3.080123708s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-163000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-163000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.33431875s)

-- stdout --
	* [stopped-upgrade-163000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-163000" primary control-plane node in "stopped-upgrade-163000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-163000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0819 11:20:53.640698   14738 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:20:53.640841   14738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:20:53.640848   14738 out.go:358] Setting ErrFile to fd 2...
	I0819 11:20:53.640851   14738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:20:53.640981   14738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:20:53.642075   14738 out.go:352] Setting JSON to false
	I0819 11:20:53.660190   14738 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6620,"bootTime":1724085033,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:20:53.660262   14738 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:20:53.665338   14738 out.go:177] * [stopped-upgrade-163000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:20:53.672222   14738 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:20:53.672271   14738 notify.go:220] Checking for updates...
	I0819 11:20:53.679355   14738 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:20:53.682247   14738 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:20:53.685359   14738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:20:53.688365   14738 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:20:53.691333   14738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:20:53.694565   14738 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:20:53.698292   14738 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 11:20:53.701281   14738 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:20:53.705333   14738 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:20:53.711249   14738 start.go:297] selected driver: qemu2
	I0819 11:20:53.711255   14738 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52396 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:20:53.711325   14738 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:20:53.713803   14738 cni.go:84] Creating CNI manager for ""
	I0819 11:20:53.713820   14738 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:20:53.713838   14738 start.go:340] cluster config:
	{Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52396 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:20:53.713890   14738 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:20:53.721302   14738 out.go:177] * Starting "stopped-upgrade-163000" primary control-plane node in "stopped-upgrade-163000" cluster
	I0819 11:20:53.725291   14738 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:20:53.725324   14738 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 11:20:53.725338   14738 cache.go:56] Caching tarball of preloaded images
	I0819 11:20:53.725409   14738 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:20:53.725418   14738 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 11:20:53.725475   14738 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/config.json ...
	I0819 11:20:53.725883   14738 start.go:360] acquireMachinesLock for stopped-upgrade-163000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:20:53.725917   14738 start.go:364] duration metric: took 29.167µs to acquireMachinesLock for "stopped-upgrade-163000"
	I0819 11:20:53.725927   14738 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:20:53.725933   14738 fix.go:54] fixHost starting: 
	I0819 11:20:53.726042   14738 fix.go:112] recreateIfNeeded on stopped-upgrade-163000: state=Stopped err=<nil>
	W0819 11:20:53.726051   14738 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:20:53.730341   14738 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-163000" ...
	I0819 11:20:53.734309   14738 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:20:53.734376   14738 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52361-:22,hostfwd=tcp::52362-:2376,hostname=stopped-upgrade-163000 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/disk.qcow2
	I0819 11:20:53.780832   14738 main.go:141] libmachine: STDOUT: 
	I0819 11:20:53.780852   14738 main.go:141] libmachine: STDERR: 
	I0819 11:20:53.780858   14738 main.go:141] libmachine: Waiting for VM to start (ssh -p 52361 docker@127.0.0.1)...
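For readability, the long libmachine invocation above breaks down as follows; this is an annotated restatement of the logged command (long host paths elided with "..."), not a different invocation:

    qemu-system-aarch64 \
      -M virt,highmem=off -cpu host \   # arm64 "virt" machine type, host CPU passthrough
      -accel hvf \                      # macOS Hypervisor.framework acceleration ("Using hvf" above)
      -m 2200 -smp 2 \                  # 2200 MiB RAM and 2 vCPUs, from the profile
      -drive file=.../edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \  # UEFI firmware image
      -display none -boot d -cdrom .../boot2docker.iso \  # headless boot from the minikube ISO
      -qmp unix:.../monitor,server,nowait \               # QMP control socket
      -pidfile .../qemu.pid \
      -nic user,model=virtio,hostfwd=tcp::52361-:22,hostfwd=tcp::52362-:2376,hostname=stopped-upgrade-163000 \  # user-mode NAT; host 52361 -> guest SSH, 52362 -> Docker TLS
      -daemonize \                      # detach once the VM is running
      .../disk.qcow2                    # machine disk image (positional argument)

The two hostfwd rules are why the next line waits on "ssh -p 52361 docker@127.0.0.1": with user-mode networking the guest is reachable only through these forwarded localhost ports.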
	I0819 11:21:13.343914   14738 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/config.json ...
	I0819 11:21:13.344798   14738 machine.go:93] provisionDockerMachine start ...
	I0819 11:21:13.344957   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.345520   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.345535   14738 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:21:13.416619   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 11:21:13.416642   14738 buildroot.go:166] provisioning hostname "stopped-upgrade-163000"
	I0819 11:21:13.416722   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.416887   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.416896   14738 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-163000 && echo "stopped-upgrade-163000" | sudo tee /etc/hostname
	I0819 11:21:13.476766   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-163000
	
	I0819 11:21:13.476822   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.476975   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.476989   14738 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-163000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-163000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-163000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:21:13.533847   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:21:13.533864   14738 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19468-11838/.minikube CaCertPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19468-11838/.minikube}
	I0819 11:21:13.533872   14738 buildroot.go:174] setting up certificates
	I0819 11:21:13.533880   14738 provision.go:84] configureAuth start
	I0819 11:21:13.533885   14738 provision.go:143] copyHostCerts
	I0819 11:21:13.533958   14738 exec_runner.go:144] found /Users/jenkins/minikube-integration/19468-11838/.minikube/key.pem, removing ...
	I0819 11:21:13.533963   14738 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19468-11838/.minikube/key.pem
	I0819 11:21:13.534061   14738 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19468-11838/.minikube/key.pem (1675 bytes)
	I0819 11:21:13.534231   14738 exec_runner.go:144] found /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.pem, removing ...
	I0819 11:21:13.534234   14738 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.pem
	I0819 11:21:13.534285   14738 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.pem (1082 bytes)
	I0819 11:21:13.534387   14738 exec_runner.go:144] found /Users/jenkins/minikube-integration/19468-11838/.minikube/cert.pem, removing ...
	I0819 11:21:13.534391   14738 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19468-11838/.minikube/cert.pem
	I0819 11:21:13.534437   14738 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19468-11838/.minikube/cert.pem (1123 bytes)
	I0819 11:21:13.534553   14738 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-163000 san=[127.0.0.1 localhost minikube stopped-upgrade-163000]
	I0819 11:21:13.618788   14738 provision.go:177] copyRemoteCerts
	I0819 11:21:13.618826   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:21:13.618833   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0819 11:21:13.647412   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:21:13.654583   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 11:21:13.661833   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:21:13.668517   14738 provision.go:87] duration metric: took 134.633292ms to configureAuth
	I0819 11:21:13.668526   14738 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:21:13.668628   14738 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:21:13.668682   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.668772   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.668776   14738 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 11:21:13.723532   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 11:21:13.723542   14738 buildroot.go:70] root file system type: tmpfs
	I0819 11:21:13.723591   14738 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 11:21:13.723636   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.723744   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.723803   14738 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 11:21:13.781149   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 11:21:13.781206   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:13.781327   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:13.781334   14738 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 11:21:14.105609   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
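The diff-or-install one-liner above is an idempotent unit-install pattern: diff exits zero when the staged file matches the installed one (so nothing happens), and non-zero when they differ or, as here, the target does not exist yet ("can't stat"), which triggers the move, reload, enable, and restart. The same idiom in isolation:

    # Install a systemd unit only when its content changed (same pattern as the logged command):
    unit=/lib/systemd/system/docker.service
    sudo diff -u "$unit" "$unit.new" || {
      sudo mv "$unit.new" "$unit"
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    }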
	
	I0819 11:21:14.105622   14738 machine.go:96] duration metric: took 760.814375ms to provisionDockerMachine
	I0819 11:21:14.105633   14738 start.go:293] postStartSetup for "stopped-upgrade-163000" (driver="qemu2")
	I0819 11:21:14.105641   14738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:21:14.105718   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:21:14.105742   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0819 11:21:14.134990   14738 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:21:14.136266   14738 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 11:21:14.136275   14738 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19468-11838/.minikube/addons for local assets ...
	I0819 11:21:14.136375   14738 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19468-11838/.minikube/files for local assets ...
	I0819 11:21:14.136497   14738 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem -> 123172.pem in /etc/ssl/certs
	I0819 11:21:14.136628   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:21:14.139153   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem --> /etc/ssl/certs/123172.pem (1708 bytes)
	I0819 11:21:14.146203   14738 start.go:296] duration metric: took 40.5645ms for postStartSetup
	I0819 11:21:14.146218   14738 fix.go:56] duration metric: took 20.42039125s for fixHost
	I0819 11:21:14.146251   14738 main.go:141] libmachine: Using SSH client type: native
	I0819 11:21:14.146354   14738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a885a0] 0x104a8ae00 <nil>  [] 0s} localhost 52361 <nil> <nil>}
	I0819 11:21:14.146359   14738 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:21:14.198480   14738 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724091674.607215504
	
	I0819 11:21:14.198488   14738 fix.go:216] guest clock: 1724091674.607215504
	I0819 11:21:14.198492   14738 fix.go:229] Guest: 2024-08-19 11:21:14.607215504 -0700 PDT Remote: 2024-08-19 11:21:14.146219 -0700 PDT m=+20.526484792 (delta=460.996504ms)
	I0819 11:21:14.198502   14738 fix.go:200] guest clock delta is within tolerance: 460.996504ms
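The tolerance check above is plain subtraction of the two clock readings: 1724091674.607215504 (guest) minus 1724091674.146219 (host-side remote timestamp) is 0.460996504 s, exactly the logged delta of 460.996504ms, so the guest clock is left alone:

    # Reproducing the logged delta from the two timestamps (double precision, so only approximate):
    awk 'BEGIN { printf "%.6f\n", 1724091674.607215504 - 1724091674.146219 }'   # ~0.460997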
	I0819 11:21:14.198505   14738 start.go:83] releasing machines lock for "stopped-upgrade-163000", held for 20.4726885s
	I0819 11:21:14.198565   14738 ssh_runner.go:195] Run: cat /version.json
	I0819 11:21:14.198577   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0819 11:21:14.198629   14738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:21:14.198666   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	W0819 11:21:14.227267   14738 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 11:21:14.227315   14738 ssh_runner.go:195] Run: systemctl --version
	I0819 11:21:14.229025   14738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:21:14.230551   14738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:21:14.230577   14738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 11:21:14.233449   14738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 11:21:14.237797   14738 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
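The two find/sed passes above normalize any pre-existing bridge and podman CNI configs to minikube's pod network: IPv6 dst/subnet entries are deleted and IPv4 subnet/gateway values are rewritten to the 10.244.0.0/16 pod CIDR. The core rewrite can be seen on a sample conflist line (illustrative input; -E used here in place of GNU sed's -r):

    echo '    "subnet": "10.88.0.0/16",' \
      | sed -E 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g'
    # ->     "subnet": "10.244.0.0/16",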
	I0819 11:21:14.237806   14738 start.go:495] detecting cgroup driver to use...
	I0819 11:21:14.237870   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:21:14.244734   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 11:21:14.247563   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 11:21:14.250820   14738 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 11:21:14.250852   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 11:21:14.254035   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:21:14.256800   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 11:21:14.259492   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:21:14.262870   14738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:21:14.266172   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 11:21:14.269099   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 11:21:14.271797   14738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 11:21:14.275040   14738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:21:14.278162   14738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:21:14.280728   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:14.356513   14738 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 11:21:14.365354   14738 start.go:495] detecting cgroup driver to use...
	I0819 11:21:14.365433   14738 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 11:21:14.372853   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:21:14.416853   14738 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:21:14.423454   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:21:14.428666   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:21:14.433218   14738 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 11:21:14.490565   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:21:14.496046   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:21:14.502094   14738 ssh_runner.go:195] Run: which cri-dockerd
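Note that /etc/crictl.yaml has now been written twice: first pointing at containerd (11:21:14.237), then, once docker is settled on as the runtime, at cri-dockerd. Given the printf above, the file should now read as follows (expected content, not captured in this log):

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/cri-dockerd.sock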
	I0819 11:21:14.503444   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 11:21:14.506534   14738 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 11:21:14.511667   14738 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 11:21:14.574167   14738 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 11:21:14.651433   14738 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 11:21:14.651497   14738 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 11:21:14.656854   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:14.722883   14738 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:21:15.877407   14738 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154512625s)
	I0819 11:21:15.877464   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 11:21:15.881859   14738 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 11:21:15.888548   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:21:15.893293   14738 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 11:21:15.953898   14738 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 11:21:16.012943   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:16.080660   14738 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 11:21:16.086556   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:21:16.090915   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:16.153616   14738 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 11:21:16.191491   14738 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 11:21:16.191570   14738 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 11:21:16.193788   14738 start.go:563] Will wait 60s for crictl version
	I0819 11:21:16.193844   14738 ssh_runner.go:195] Run: which crictl
	I0819 11:21:16.195256   14738 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:21:16.210667   14738 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 11:21:16.210736   14738 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:21:16.226895   14738 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:21:16.248143   14738 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 11:21:16.248260   14738 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 11:21:16.249548   14738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
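The hosts-file edit above (and the control-plane.minikube.internal edit later at 11:21:21) uses a filter-append-copy idiom rather than sed -i: any existing line for the name is filtered out, the desired entry is appended, and the result is copied (not renamed) over /etc/hosts, which keeps the edit idempotent and also works where /etc/hosts cannot be replaced by rename. Spelled out:

    # Idempotently pin "10.0.2.2<TAB>host.minikube.internal" (same idiom as the logged command):
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '10.0.2.2\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts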
	I0819 11:21:16.253120   14738 kubeadm.go:883] updating cluster {Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52396 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 11:21:16.253174   14738 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:21:16.253211   14738 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:21:16.263409   14738 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:21:16.263423   14738 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:21:16.263467   14738 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:21:16.266886   14738 ssh_runner.go:195] Run: which lz4
	I0819 11:21:16.268227   14738 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 11:21:16.269499   14738 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 11:21:16.269510   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 11:21:17.174558   14738 docker.go:649] duration metric: took 906.36575ms to copy over tarball
	I0819 11:21:17.174622   14738 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 11:21:18.350299   14738 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.17567075s)
	I0819 11:21:18.350314   14738 ssh_runner.go:146] rm: /preloaded.tar.lz4
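Because the guest's preloaded images carry the old k8s.gcr.io names ("registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded" above), minikube pushes the whole preload tarball from the host instead: 359,514,331 bytes copied over loopback SSH in about 906 ms, unpacked over /var with xattrs preserved in about 1.18 s, then deleted. A rough rate check from the logged numbers:

    awk 'BEGIN { printf "%.0f MB/s\n", 359514331 / 1e6 / 0.906 }'   # ~397 MB/s, effectively disk-bound over loopback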
	I0819 11:21:18.365835   14738 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:21:18.368819   14738 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 11:21:18.373886   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:18.431240   14738 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:21:19.944169   14738 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.512916333s)
	I0819 11:21:19.944253   14738 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:21:19.956992   14738 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:21:19.957002   14738 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:21:19.957007   14738 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 11:21:19.961046   14738 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:21:19.962564   14738 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:21:19.964659   14738 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:21:19.964733   14738 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:21:19.966596   14738 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:21:19.966711   14738 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:21:19.968108   14738 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:21:19.968124   14738 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:21:19.969316   14738 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:21:19.969345   14738 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:21:19.970544   14738 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:21:19.970547   14738 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:21:19.972103   14738 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:21:19.972144   14738 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 11:21:19.972951   14738 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:21:19.974095   14738 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
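The burst of "daemon lookup ... No such image" lines above is expected rather than a failure: for each required image minikube first probes the host's local Docker daemon and, on a miss, falls back to the on-disk cache under .minikube/cache/images (and ultimately a remote pull). The probe amounts to (illustrative):

    docker image inspect registry.k8s.io/pause:3.7 >/dev/null 2>&1 \
      || echo "not in the local daemon; fall back to cache/registry"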
	I0819 11:21:20.408624   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:21:20.420567   14738 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 11:21:20.420594   14738 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:21:20.420640   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:21:20.421573   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:21:20.421758   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:21:20.426692   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:21:20.436589   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 11:21:20.439780   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 11:21:20.449501   14738 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 11:21:20.449523   14738 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:21:20.449575   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:21:20.449584   14738 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 11:21:20.449630   14738 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 11:21:20.449648   14738 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:21:20.449663   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:21:20.449676   14738 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:21:20.449716   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0819 11:21:20.456102   14738 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 11:21:20.456223   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:21:20.461296   14738 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 11:21:20.461312   14738 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:21:20.461361   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 11:21:20.467064   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 11:21:20.482781   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 11:21:20.482844   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 11:21:20.482879   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 11:21:20.490888   14738 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 11:21:20.490908   14738 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:21:20.490962   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:21:20.490980   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 11:21:20.491078   14738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:21:20.492860   14738 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 11:21:20.492877   14738 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 11:21:20.492909   14738 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 11:21:20.508625   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 11:21:20.508646   14738 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 11:21:20.508658   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0819 11:21:20.508677   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 11:21:20.508744   14738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:21:20.508766   14738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 11:21:20.521632   14738 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 11:21:20.521661   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 11:21:20.521699   14738 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 11:21:20.521711   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0819 11:21:20.555957   14738 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 11:21:20.555973   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0819 11:21:20.564500   14738 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 11:21:20.564618   14738 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:21:20.630781   14738 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 11:21:20.630809   14738 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:21:20.630834   14738 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0819 11:21:20.630869   14738 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:21:20.633175   14738 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:21:20.633189   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 11:21:20.667580   14738 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 11:21:20.667711   14738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:21:20.745002   14738 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 11:21:20.745016   14738 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 11:21:20.745048   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 11:21:20.825107   14738 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:21:20.825176   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 11:21:21.143522   14738 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 11:21:21.143544   14738 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:21:21.143550   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0819 11:21:21.292005   14738 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 11:21:21.292043   14738 cache_images.go:92] duration metric: took 1.33503s to LoadCachedImages
	W0819 11:21:21.292091   14738 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
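This is the notable error of this start attempt: the kube-* images were purged from the VM's runtime because their hashes did not match ("needs transfer" above), but the host-side cache file for kube-scheduler was never populated, so LoadCachedImages aborts with the stat error; minikube flags it (X ...) and continues without the cached control-plane images. The gap can be confirmed directly on the host (path taken from the error):

    stat /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
    # expected here: "No such file or directory", matching the logged error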
	I0819 11:21:21.292097   14738 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 11:21:21.292153   14738 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-163000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:21:21.292215   14738 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 11:21:21.305723   14738 cni.go:84] Creating CNI manager for ""
	I0819 11:21:21.305736   14738 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:21:21.305741   14738 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:21:21.305750   14738 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-163000 NodeName:stopped-upgrade-163000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:21:21.305814   14738 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-163000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 11:21:21.305880   14738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 11:21:21.308771   14738 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:21:21.308806   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:21:21.311289   14738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 11:21:21.316368   14738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:21:21.320879   14738 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
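At this point all three rendered artifacts are in the guest: the kubelet drop-in (10-kubeadm.conf), the kubelet unit, and the kubeadm config at /var/tmp/minikube/kubeadm.yaml.new. When debugging a start like this one, the rendered config can be exercised without mutating the node via kubeadm's dry-run mode (a hypothetical check, not something the test runs):

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new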
	I0819 11:21:21.326103   14738 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 11:21:21.327362   14738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:21:21.330979   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:21:21.394113   14738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:21:21.401325   14738 certs.go:68] Setting up /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000 for IP: 10.0.2.15
	I0819 11:21:21.401334   14738 certs.go:194] generating shared ca certs ...
	I0819 11:21:21.401342   14738 certs.go:226] acquiring lock for ca certs: {Name:mka749b3c39f634f903dfb76b75647518084e393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:21:21.401509   14738 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.key
	I0819 11:21:21.401564   14738 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/proxy-client-ca.key
	I0819 11:21:21.401570   14738 certs.go:256] generating profile certs ...
	I0819 11:21:21.401643   14738 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/client.key
	I0819 11:21:21.401661   14738 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc
	I0819 11:21:21.401673   14738 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 11:21:21.485600   14738 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc ...
	I0819 11:21:21.485613   14738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc: {Name:mk6dc61fc842d4303f5e2be91343e2942c462b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:21:21.485910   14738 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc ...
	I0819 11:21:21.485923   14738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc: {Name:mkd32adbd348a4236fe43d6c4009602ecea8788e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:21:21.486057   14738 certs.go:381] copying /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt.17ddb0fc -> /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt
	I0819 11:21:21.486193   14738 certs.go:385] copying /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key.17ddb0fc -> /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key
	I0819 11:21:21.486414   14738 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/proxy-client.key
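
The apiserver serving cert generated above carries IP SANs for the service VIP, loopback, and the node IP (10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15, per the crypto.go line at 11:21:21.401673). A minimal sketch of issuing such a cert with Go's crypto/x509; minikube signs with its cluster CA, whereas this sketch self-signs for brevity:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // IP SANs matching the list in the log above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        // Self-signed here (template == parent); the real flow signs with the CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }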
	I0819 11:21:21.486549   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/12317.pem (1338 bytes)
	W0819 11:21:21.486580   14738 certs.go:480] ignoring /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/12317_empty.pem, impossibly tiny 0 bytes
	I0819 11:21:21.486590   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:21:21.486610   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:21:21.486641   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:21:21.486664   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/key.pem (1675 bytes)
	I0819 11:21:21.486702   14738 certs.go:484] found cert: /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem (1708 bytes)
	I0819 11:21:21.487049   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:21:21.494237   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:21:21.500733   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:21:21.507366   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 11:21:21.514750   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 11:21:21.522307   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 11:21:21.528992   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:21:21.535401   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:21:21.542505   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/12317.pem --> /usr/share/ca-certificates/12317.pem (1338 bytes)
	I0819 11:21:21.549564   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/ssl/certs/123172.pem --> /usr/share/ca-certificates/123172.pem (1708 bytes)
	I0819 11:21:21.556100   14738 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:21:21.562824   14738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:21:21.567997   14738 ssh_runner.go:195] Run: openssl version
	I0819 11:21:21.569850   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12317.pem && ln -fs /usr/share/ca-certificates/12317.pem /etc/ssl/certs/12317.pem"
	I0819 11:21:21.572667   14738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12317.pem
	I0819 11:21:21.574106   14738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:06 /usr/share/ca-certificates/12317.pem
	I0819 11:21:21.574126   14738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12317.pem
	I0819 11:21:21.576025   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12317.pem /etc/ssl/certs/51391683.0"
	I0819 11:21:21.579247   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123172.pem && ln -fs /usr/share/ca-certificates/123172.pem /etc/ssl/certs/123172.pem"
	I0819 11:21:21.582652   14738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123172.pem
	I0819 11:21:21.584097   14738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:06 /usr/share/ca-certificates/123172.pem
	I0819 11:21:21.584118   14738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123172.pem
	I0819 11:21:21.585894   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123172.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 11:21:21.588811   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:21:21.591553   14738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:21:21.593133   14738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:21:21.593156   14738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:21:21.594932   14738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
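
The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, printed by the "openssl x509 -hash -noout" runs: OpenSSL looks up CA certificates in /etc/ssl/certs by <hash>.N, where N disambiguates hash collisions (hence ".0"). The "test -L || ln -fs" guard keeps the step idempotent. A sketch of the same hash-and-link step in Go (mirrors the shell above; the helper is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCA(pemPath, certDir string) error {
        // openssl prints the subject-name hash on stdout, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certDir, hash+".0")
        os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }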
	I0819 11:21:21.598296   14738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:21:21.599725   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 11:21:21.601858   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 11:21:21.603699   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 11:21:21.605643   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 11:21:21.607465   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 11:21:21.609298   14738 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
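
Each "openssl x509 -checkend 86400" run above asks whether the certificate expires within the next 86400 seconds (24 hours): openssl exits nonzero if it will, which is how the runner decides whether control-plane certs need regenerating. The same check expressed with Go's crypto/x509 (a hypothetical helper, shown only to make the exit-status convention explicit):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True if NotAfter falls inside the next d — what -checkend tests.
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if soon {
            os.Exit(1) // same convention as -checkend: nonzero means "will expire"
        }
    }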
	I0819 11:21:21.611102   14738 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52396 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:21:21.611162   14738 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:21:21.625439   14738 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:21:21.628396   14738 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 11:21:21.628402   14738 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 11:21:21.628422   14738 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 11:21:21.631418   14738 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:21:21.631710   14738 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-163000" does not appear in /Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:21:21.631811   14738 kubeconfig.go:62] /Users/jenkins/minikube-integration/19468-11838/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-163000" cluster setting kubeconfig missing "stopped-upgrade-163000" context setting]
	I0819 11:21:21.631991   14738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/kubeconfig: {Name:mkf06e67426049c2259f6e46b5143872117d8aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:21:21.632422   14738 kapi.go:59] client config for stopped-upgrade-163000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/client.key", CAFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106043d10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:21:21.632745   14738 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 11:21:21.635200   14738 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-163000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
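
The drift detection above rests on diff's exit status: "sudo diff -u old new" exits 0 when the files match, 1 when they differ, and >1 on error, and kubeadm.go:640 treats a nonzero exit as drift (here the CRI socket gained its unix:// scheme and the kubelet's cgroup driver moved from systemd to cgroupfs). A sketch of the same check (illustrative; the exit-status handling follows diff(1), not minikube's source):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func drifted(oldPath, newPath string) (bool, []byte, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, out, nil // exit 0: identical
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, out, nil // exit 1: files differ, out holds the unified diff
        }
        return false, out, err // exit >1: diff itself failed
    }

    func main() {
        changed, diff, err := drifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        if changed {
            fmt.Printf("config drift, will reconfigure:\n%s", diff)
        }
    }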
	I0819 11:21:21.635206   14738 kubeadm.go:1160] stopping kube-system containers ...
	I0819 11:21:21.635239   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:21:21.645979   14738 docker.go:483] Stopping containers: [cba74a0177d5 bd9cc3b824ba e664d2838747 5b1fce91598f 70ca7c1620fa c9b1bc8e1717 b0d0e25e65a0 0be0dd934796]
	I0819 11:21:21.646042   14738 ssh_runner.go:195] Run: docker stop cba74a0177d5 bd9cc3b824ba e664d2838747 5b1fce91598f 70ca7c1620fa c9b1bc8e1717 b0d0e25e65a0 0be0dd934796
	I0819 11:21:21.656717   14738 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 11:21:21.662182   14738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:21:21.664830   14738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:21:21.664835   14738 kubeadm.go:157] found existing configuration files:
	
	I0819 11:21:21.664855   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/admin.conf
	I0819 11:21:21.667205   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:21:21.667222   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:21:21.670200   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/kubelet.conf
	I0819 11:21:21.672908   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:21:21.672935   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:21:21.675430   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/controller-manager.conf
	I0819 11:21:21.678465   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:21:21.678488   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:21:21.681321   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/scheduler.conf
	I0819 11:21:21.683598   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:21:21.683619   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
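
The four grep/rm pairs above apply one rule per kubeconfig file: keep it only if it references the expected endpoint https://control-plane.minikube.internal:52396, otherwise remove it so kubeadm can regenerate it (the status-2 greps here just mean the files do not exist yet). A compact sketch of that loop (hypothetical helper; the real logic lives in minikube's kubeadm.go):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:52396")
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            data, err := os.ReadFile(path)
            if err != nil || !bytes.Contains(data, endpoint) {
                os.Remove(path) // stale or missing: let kubeadm regenerate it
                fmt.Println("removed", path)
            }
        }
    }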
	I0819 11:21:21.686775   14738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:21:21.689974   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:21:21.712534   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:21:22.331606   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:21:22.452432   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:21:22.479054   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
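
Rather than a full "kubeadm init", the restart path replays individual phases in dependency order against the same /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, etcd. A minimal sketch of that sequence (paths and phase names match the log; the loop itself is illustrative):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                os.Exit(1) // stop at the first failed phase
            }
        }
    }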
	I0819 11:21:22.507688   14738 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:21:22.507770   14738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:21:23.009875   14738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:21:23.509840   14738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:21:23.514452   14738 api_server.go:72] duration metric: took 1.006770417s to wait for apiserver process to appear ...
	I0819 11:21:23.514462   14738 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:21:23.514470   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:28.516612   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:28.516658   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:33.517042   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:33.517096   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:38.517582   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:38.517655   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:43.518520   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:43.518553   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:48.519698   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:48.519719   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:53.520711   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:53.520737   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:21:58.522009   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:21:58.522030   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:03.523623   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:03.523662   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:08.525765   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:08.525841   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:13.528327   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:13.528368   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:18.530656   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:18.530720   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:23.533096   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
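
From 11:21:23 onward the runner polls https://10.0.2.15:8443/healthz, and every probe above dies after about 5 seconds with "context deadline exceeded": the apiserver never answers, which is what ultimately fails this test. A sketch of such a poll loop, with a short per-probe deadline inside a larger overall budget (endpoint and timings mirror the log; the TLS config is illustrative, since the real client trusts minikube's CA rather than skipping verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-probe deadline, like the 5 s gaps above
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }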
	I0819 11:22:23.533212   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:22:23.546347   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:22:23.546414   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:22:23.556734   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:22:23.556801   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:22:23.567587   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:22:23.567651   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:22:23.578104   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:22:23.578176   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:22:23.592357   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:22:23.592419   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:22:23.602938   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:22:23.602996   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:22:23.613378   14738 logs.go:276] 0 containers: []
	W0819 11:22:23.613389   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:22:23.613437   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:22:23.628484   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:22:23.628506   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:22:23.628512   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:22:23.652206   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:22:23.652217   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:22:23.663784   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:22:23.663794   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:22:23.779883   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:22:23.779894   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:22:23.808140   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:22:23.808151   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:22:23.820046   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:22:23.820056   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:22:23.859096   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:22:23.859113   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:22:23.864559   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:22:23.864567   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:22:23.883597   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:22:23.883609   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:22:23.895348   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:22:23.895360   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:22:23.913614   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:22:23.913624   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:22:23.928314   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:22:23.928325   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:22:23.940202   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:22:23.940213   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:22:23.966104   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:22:23.966112   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:22:23.977651   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:22:23.977666   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:22:23.991125   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:22:23.991136   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
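
Each failed healthz window triggers the same diagnostic sweep seen above: a "docker ps -a --filter name=k8s_<component>" per component to resolve container IDs (two IDs apiece for apiserver, etcd, scheduler, and controller-manager here, since "ps -a" also lists the exited previous instances), then "docker logs --tail 400" per container, plus journalctl, dmesg, and "kubectl describe nodes". The cycle repeats for every retry window below. An illustrative version of the per-component part (component names and the 400-line tail match the log; the helper is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func tailComponent(name string) error {
        // Resolve current and exited container IDs for this component.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        for _, id := range strings.Fields(string(out)) {
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("==> %s [%s] <==\n%s", name, id, logs)
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            if err := tailComponent(c); err != nil {
                fmt.Println("error:", err)
            }
        }
    }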
	I0819 11:22:26.507304   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:31.510049   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:31.510210   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:22:31.528506   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:22:31.528592   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:22:31.540294   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:22:31.540366   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:22:31.551455   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:22:31.551522   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:22:31.561773   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:22:31.561839   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:22:31.573082   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:22:31.573151   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:22:31.583305   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:22:31.583371   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:22:31.593891   14738 logs.go:276] 0 containers: []
	W0819 11:22:31.593900   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:22:31.593967   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:22:31.604262   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:22:31.604280   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:22:31.604285   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:22:31.608330   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:22:31.608337   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:22:31.646259   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:22:31.646273   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:22:31.661751   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:22:31.661763   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:22:31.675670   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:22:31.675682   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:22:31.697007   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:22:31.697017   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:22:31.723193   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:22:31.723203   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:22:31.748607   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:22:31.748615   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:22:31.759751   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:22:31.759765   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:22:31.797418   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:22:31.797426   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:22:31.811567   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:22:31.811579   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:22:31.825749   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:22:31.825758   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:22:31.836458   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:22:31.836469   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:22:31.857707   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:22:31.857717   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:22:31.884543   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:22:31.884553   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:22:31.896304   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:22:31.896320   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:22:34.411612   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:39.413919   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:39.414025   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:22:39.425724   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:22:39.425798   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:22:39.436467   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:22:39.436530   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:22:39.446910   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:22:39.446990   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:22:39.460952   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:22:39.461033   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:22:39.476265   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:22:39.476334   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:22:39.486726   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:22:39.486790   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:22:39.496887   14738 logs.go:276] 0 containers: []
	W0819 11:22:39.496900   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:22:39.496955   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:22:39.508899   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:22:39.508915   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:22:39.508920   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:22:39.521844   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:22:39.521853   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:22:39.526469   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:22:39.526477   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:22:39.540531   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:22:39.540542   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:22:39.554669   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:22:39.554679   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:22:39.569108   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:22:39.569124   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:22:39.596555   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:22:39.596567   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:22:39.613670   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:22:39.613680   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:22:39.639673   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:22:39.639681   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:22:39.676933   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:22:39.676949   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:22:39.691869   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:22:39.691880   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:22:39.717050   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:22:39.717061   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:22:39.728215   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:22:39.728224   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:22:39.740799   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:22:39.740812   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:22:39.778245   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:22:39.778254   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:22:39.789536   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:22:39.789549   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:22:42.303755   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:47.306012   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:47.306115   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:22:47.320735   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:22:47.320808   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:22:47.332287   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:22:47.332352   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:22:47.342555   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:22:47.342628   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:22:47.353059   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:22:47.353125   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:22:47.363453   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:22:47.363515   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:22:47.377004   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:22:47.377075   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:22:47.386864   14738 logs.go:276] 0 containers: []
	W0819 11:22:47.386878   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:22:47.386931   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:22:47.402721   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:22:47.402737   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:22:47.402743   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:22:47.414825   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:22:47.414839   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:22:47.460076   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:22:47.460087   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:22:47.484824   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:22:47.484835   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:22:47.523125   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:22:47.523133   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:22:47.563024   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:22:47.563035   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:22:47.584050   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:22:47.584061   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:22:47.615770   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:22:47.615781   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:22:47.627493   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:22:47.627505   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:22:47.640376   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:22:47.640389   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:22:47.644948   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:22:47.644957   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:22:47.659719   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:22:47.659731   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:22:47.679103   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:22:47.679120   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:22:47.693186   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:22:47.693196   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:22:47.703926   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:22:47.703937   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:22:47.716272   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:22:47.716284   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:22:50.236696   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:22:55.238043   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:22:55.238198   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:22:55.250583   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:22:55.250660   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:22:55.263049   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:22:55.263118   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:22:55.273290   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:22:55.273357   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:22:55.283879   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:22:55.283955   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:22:55.294342   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:22:55.294402   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:22:55.304845   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:22:55.304904   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:22:55.319626   14738 logs.go:276] 0 containers: []
	W0819 11:22:55.319639   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:22:55.319693   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:22:55.329918   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:22:55.329935   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:22:55.329941   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:22:55.341621   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:22:55.341632   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:22:55.355988   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:22:55.355999   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:22:55.394163   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:22:55.394173   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:22:55.428500   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:22:55.428511   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:22:55.442782   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:22:55.442793   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:22:55.463906   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:22:55.463917   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:22:55.476723   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:22:55.476735   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:22:55.502227   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:22:55.502238   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:22:55.516823   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:22:55.516832   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:22:55.539024   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:22:55.539036   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:22:55.552596   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:22:55.552606   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:22:55.569736   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:22:55.569746   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:22:55.581768   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:22:55.581779   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:22:55.585707   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:22:55.585715   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:22:55.597462   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:22:55.597473   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:22:58.123273   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:03.125612   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:03.125867   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:03.150650   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:03.150742   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:03.170587   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:03.170659   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:03.182737   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:03.182798   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:03.193753   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:03.193820   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:03.204584   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:03.204658   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:03.215833   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:03.215900   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:03.226406   14738 logs.go:276] 0 containers: []
	W0819 11:23:03.226418   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:03.226476   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:03.236880   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:03.236896   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:03.236902   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:03.250862   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:03.250871   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:03.262307   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:03.262318   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:03.277245   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:03.277257   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:03.297219   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:03.297229   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:03.309813   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:03.309824   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:03.331642   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:03.331653   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:03.336453   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:03.336460   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:03.354740   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:03.354751   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:03.366659   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:03.366671   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:03.378782   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:03.378795   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:03.416764   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:03.416776   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:03.450973   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:03.451000   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:03.465206   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:03.465217   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:03.489576   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:03.489586   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:03.514812   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:03.514822   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:06.029179   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:11.030138   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:11.030400   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:11.057783   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:11.057900   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:11.075781   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:11.075864   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:11.088905   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:11.088980   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:11.100206   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:11.100278   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:11.118516   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:11.118579   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:11.136209   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:11.136275   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:11.146830   14738 logs.go:276] 0 containers: []
	W0819 11:23:11.146844   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:11.146897   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:11.157123   14738 logs.go:276] 1 containers: [626478da71fb]
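
The eight docker ps -a invocations above are how the collector discovers what to dump: each component is looked up by the kubelet's k8s_<name> container-naming convention, and --format={{.ID}} reduces the output to bare IDs. Two IDs for apiserver, etcd, scheduler, and controller-manager most plausibly mean an exited container plus its restarted replacement, which is why both get their logs tailed below. A sketch of the same enumeration (containersFor is a made-up helper; Docker is assumed to be on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containersFor lists all containers, running or exited, whose name
    // carries the kubelet's k8s_<component> prefix, printing only the
    // IDs, exactly like the filtered docker ps calls above.
    func containersFor(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containersFor(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }
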
	I0819 11:23:11.157143   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:11.157150   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:11.168844   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:11.168855   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:11.190659   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:11.190669   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:11.201974   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:11.201986   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:11.237140   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:11.237153   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:11.241391   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:11.241399   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:11.260153   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:11.260163   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:11.273861   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:11.273871   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:11.284906   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:11.284915   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:11.303401   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:11.303412   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:11.315707   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:11.315716   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:11.352664   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:11.352670   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:11.377526   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:11.377537   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:11.392368   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:11.392382   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:11.403869   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:11.403880   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:11.424873   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:11.424885   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
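
One step in the cycle above is worth unpacking: the "container status" command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, prefers crictl but falls back to plain docker ps -a whenever crictl is missing or exits non-zero; the echo crictl arm deliberately yields a command that fails when crictl is not installed, so the || fallback still fires. The same logic rendered in Go as a hedged sketch (containerStatus is a made-up name; minikube itself runs the shell line remotely via /bin/bash -c, as the ssh_runner entries show):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the shell fallback above: try crictl first,
    // and fall back to `docker ps -a` when crictl is absent or fails.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("neither crictl nor docker answered:", err)
            return
        }
        fmt.Print(string(out))
    }
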
	I0819 11:23:13.951928   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:18.954255   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:18.954518   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:18.977297   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:18.977402   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:18.996260   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:18.996340   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:19.008251   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:19.008321   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:19.018684   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:19.018745   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:19.029932   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:19.029993   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:19.040852   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:19.040912   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:19.051327   14738 logs.go:276] 0 containers: []
	W0819 11:23:19.051338   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:19.051396   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:19.061890   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:19.061906   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:19.061911   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:19.100030   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:19.100038   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:19.115758   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:19.115770   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:19.141487   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:19.141497   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:19.145881   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:19.145888   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:19.171905   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:19.171916   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:19.187666   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:19.187680   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:19.205500   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:19.205512   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:19.251054   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:19.251066   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:19.265821   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:19.265832   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:19.278889   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:19.278903   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:19.290725   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:19.290739   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:19.311596   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:19.311607   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:19.326567   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:19.326578   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:19.338953   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:19.338963   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:19.360362   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:19.360373   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:21.873747   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:26.875964   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:26.876224   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:26.902018   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:26.902130   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:26.916403   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:26.916488   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:26.929782   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:26.929851   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:26.940238   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:26.940302   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:26.950905   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:26.950968   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:26.961989   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:26.962051   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:26.972013   14738 logs.go:276] 0 containers: []
	W0819 11:23:26.972024   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:26.972082   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:26.982193   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:26.982210   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:26.982216   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:26.997233   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:26.997245   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:27.022230   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:27.022240   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:27.035037   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:27.035049   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:27.072192   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:27.072206   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:27.087290   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:27.087300   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:27.098823   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:27.098836   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:27.110744   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:27.110754   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:27.122385   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:27.122398   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:27.126773   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:27.126779   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:27.144016   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:27.144029   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:27.156175   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:27.156186   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:27.170261   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:27.170279   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:27.196037   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:27.196048   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:27.221181   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:27.221192   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:27.232922   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:27.232933   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:29.773331   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:34.774041   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:34.774337   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:34.801037   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:34.801161   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:34.821130   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:34.821213   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:34.833561   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:34.833637   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:34.844503   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:34.844576   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:34.854964   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:34.855033   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:34.870024   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:34.870091   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:34.880549   14738 logs.go:276] 0 containers: []
	W0819 11:23:34.880560   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:34.880612   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:34.890950   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:34.890967   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:34.890972   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:34.895674   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:34.895681   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:34.907314   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:34.907326   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:34.920081   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:34.920092   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:34.932697   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:34.932708   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:34.962301   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:34.962308   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:34.976268   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:34.976278   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:35.001427   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:35.001437   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:35.012774   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:35.012785   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:35.034988   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:35.034999   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:35.047819   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:35.047829   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:35.059691   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:35.059702   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:35.096752   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:35.096764   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:35.111423   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:35.111432   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:35.134165   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:35.134180   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:35.173706   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:35.173716   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:37.689676   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:42.692026   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:42.692224   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:42.711456   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:42.711543   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:42.722614   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:42.722679   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:42.733559   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:42.733627   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:42.744024   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:42.744098   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:42.755024   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:42.755089   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:42.765856   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:42.765928   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:42.776445   14738 logs.go:276] 0 containers: []
	W0819 11:23:42.776455   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:42.776513   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:42.786856   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:42.786872   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:42.786877   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:42.791261   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:42.791270   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:42.827396   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:42.827411   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:42.842028   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:42.842040   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:42.873449   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:42.873462   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:42.897771   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:42.897779   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:42.909195   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:42.909206   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:42.935880   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:42.935892   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:42.947425   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:42.947438   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:42.984578   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:42.984586   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:42.995818   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:42.995829   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:43.007019   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:43.007029   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:43.025042   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:43.025056   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:43.039474   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:43.039487   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:43.053873   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:43.053886   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:43.071705   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:43.071715   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:45.588118   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:50.590694   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:50.591000   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:50.618615   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:50.618746   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:50.639591   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:50.639670   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:50.653379   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:50.653455   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:50.669155   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:50.669227   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:50.679455   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:50.679526   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:50.690105   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:50.690180   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:50.700779   14738 logs.go:276] 0 containers: []
	W0819 11:23:50.700790   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:50.700849   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:50.712300   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:50.712324   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:50.712331   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:50.723846   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:50.723859   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:50.728441   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:50.728449   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:50.763154   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:50.763167   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:50.777116   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:50.777127   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:50.788729   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:50.788743   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:50.811630   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:50.811637   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:50.823109   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:50.823120   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:23:50.844261   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:50.844275   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:50.859006   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:50.859019   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:50.872358   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:50.872369   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:50.900607   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:50.900618   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:50.914445   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:50.914455   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:50.933764   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:50.933774   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:50.971903   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:50.971915   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:50.985851   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:50.985863   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:53.497951   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:23:58.500245   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:23:58.500596   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:23:58.533472   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:23:58.533604   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:23:58.552452   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:23:58.552547   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:23:58.566391   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:23:58.566466   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:23:58.578160   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:23:58.578231   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:23:58.588251   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:23:58.588330   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:23:58.598734   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:23:58.598800   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:23:58.608620   14738 logs.go:276] 0 containers: []
	W0819 11:23:58.608632   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:23:58.608689   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:23:58.619275   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:23:58.619293   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:23:58.619299   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:23:58.623926   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:23:58.623933   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:23:58.658316   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:23:58.658327   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:23:58.672957   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:23:58.672967   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:23:58.690819   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:23:58.690829   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:23:58.729638   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:23:58.729646   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:23:58.741378   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:23:58.741389   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:23:58.752758   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:23:58.752768   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:23:58.776882   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:23:58.776889   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:23:58.790129   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:23:58.790140   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:23:58.802877   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:23:58.802888   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:23:58.815148   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:23:58.815160   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:23:58.839432   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:23:58.839441   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:23:58.853121   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:23:58.853131   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:23:58.875093   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:23:58.875103   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:23:58.887226   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:23:58.887237   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:01.411155   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:06.413857   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:06.414042   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:06.439236   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:06.439350   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:06.456157   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:06.456231   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:06.469380   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:06.469454   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:06.481282   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:06.481349   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:06.498021   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:06.498084   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:06.508638   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:06.508707   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:06.518584   14738 logs.go:276] 0 containers: []
	W0819 11:24:06.518595   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:06.518648   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:06.528772   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:06.528791   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:06.528796   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:06.540326   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:06.540336   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:06.565261   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:06.565270   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:06.577049   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:06.577060   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:06.597327   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:06.597336   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:06.610602   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:06.610615   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:06.635959   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:06.635969   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:06.640279   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:06.640287   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:06.676337   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:06.676349   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:06.691077   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:06.691088   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:06.705023   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:06.705034   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:06.720079   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:06.720092   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:06.733812   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:06.733824   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:06.745934   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:06.745945   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:06.782006   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:06.782022   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:06.793052   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:06.793066   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:09.315836   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:14.318145   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:14.318313   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:14.333107   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:14.333184   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:14.344985   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:14.345052   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:14.355387   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:14.355453   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:14.365909   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:14.365975   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:14.375884   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:14.375945   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:14.386486   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:14.386547   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:14.396633   14738 logs.go:276] 0 containers: []
	W0819 11:24:14.396643   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:14.396695   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:14.409329   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:14.409346   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:14.409353   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:14.447147   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:14.447154   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:14.482437   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:14.482448   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:14.494110   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:14.494124   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:14.506563   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:14.506574   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:14.517944   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:14.517958   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:14.532133   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:14.532144   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:14.556430   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:14.556443   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:14.570534   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:14.570543   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:14.584763   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:14.584774   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:14.597113   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:14.597122   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:14.601164   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:14.601172   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:14.622449   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:14.622459   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:14.637468   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:14.637481   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:14.656197   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:14.656208   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:14.668120   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:14.668131   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:17.194282   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:22.195238   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
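
Note the changed failure mode here: every earlier probe died as a client timeout while awaiting headers (a TCP connection was established but no response came back), whereas this one fails in the dial itself ("dial tcp 10.0.2.15:8443: i/o timeout"), which suggests, though does not prove, that nothing was accepting connections on 8443 at that instant. Both shapes satisfy Go's net.Error interface, so a retry loop like the one recorded here needs only one test; a minimal sketch (isTimeout is a made-up helper):

    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    // isTimeout reports whether err is a network timeout of either shape
    // seen in this log: the *url.Error-wrapped client timeout or the raw
    // dial i/o timeout.
    func isTimeout(err error) bool {
        var netErr net.Error
        return errors.As(err, &netErr) && netErr.Timeout()
    }

    func main() {
        // In real use err would be the error returned by the healthz GET.
        fmt.Println(isTimeout(nil)) // false
    }
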
	I0819 11:24:22.195338   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:22.207173   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:22.207245   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:22.218898   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:22.218962   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:22.229663   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:22.229733   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:22.240194   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:22.240258   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:22.250329   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:22.250393   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:22.261084   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:22.261150   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:22.271395   14738 logs.go:276] 0 containers: []
	W0819 11:24:22.271405   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:22.271463   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:22.281729   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:22.281749   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:22.281755   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:22.299287   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:22.299298   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:22.313187   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:22.313199   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:22.326077   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:22.326088   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:22.337665   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:22.337675   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:22.363194   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:22.363204   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:22.374743   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:22.374755   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:22.392263   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:22.392273   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:22.403763   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:22.403774   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:22.438057   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:22.438068   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:22.449172   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:22.449182   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:22.487128   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:22.487135   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:22.491607   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:22.491615   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:22.526411   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:22.526430   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:22.540775   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:22.540784   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:22.552871   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:22.552884   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:25.078547   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:30.080619   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:30.080778   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:30.097381   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:30.097470   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:30.112021   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:30.112092   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:30.123515   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:30.123585   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:30.134445   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:30.134514   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:30.146475   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:30.146536   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:30.156588   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:30.156650   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:30.167651   14738 logs.go:276] 0 containers: []
	W0819 11:24:30.167663   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:30.167718   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:30.182527   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:30.182546   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:30.182552   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:30.221669   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:30.221680   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:30.226618   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:30.226626   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:30.261787   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:30.261798   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:30.275662   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:30.275673   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:30.287679   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:30.287690   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:30.302329   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:30.302341   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:30.323046   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:30.323055   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:30.334935   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:30.334946   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:30.360247   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:30.360259   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:30.374855   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:30.374865   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:30.386347   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:30.386358   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:30.404033   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:30.404047   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:30.421602   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:30.421614   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:30.446416   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:30.446434   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:30.462146   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:30.462158   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:32.977219   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:37.979820   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
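	[editor's note] The two lines above repeat throughout this section: api_server.go polls https://10.0.2.15:8443/healthz with a short client timeout, and each failed attempt triggers another round of log gathering. A minimal sketch of that polling pattern follows; the helper name, the 5-second timeout (inferred from the timestamps), and the retry backoff are illustrative assumptions, not minikube's actual code.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz returns nil once https://<ip>:8443/healthz answers 200 OK.
// Assumption: timeout and endpoint mirror what the log above shows.
func checkHealthz(ip string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		// The guest apiserver cert is not trusted by this host; a real
		// client would pin the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s:8443/healthz", ip))
	if err != nil {
		return err // e.g. "context deadline exceeded", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	for {
		if err := checkHealthz("10.0.2.15", 5*time.Second); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second) // back off, then retry
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```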
	I0819 11:24:37.980061   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:38.001358   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:38.001460   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:38.021560   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:38.021636   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:38.034212   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:38.034275   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:38.045304   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:38.045374   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:38.055608   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:38.055674   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:38.066197   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:38.066268   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:38.076473   14738 logs.go:276] 0 containers: []
	W0819 11:24:38.076484   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:38.076543   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:38.086586   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:38.086604   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:38.086632   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:38.091059   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:38.091065   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:38.112614   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:38.112625   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:38.129703   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:38.129714   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:38.153887   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:38.153896   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:38.194704   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:38.194715   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:38.210363   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:38.210373   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:38.224726   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:38.224737   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:38.236391   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:38.236403   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:38.248870   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:38.248886   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:38.262402   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:38.262415   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:38.294624   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:38.294639   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:38.333073   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:38.333095   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:38.348372   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:38.348390   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:38.360608   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:38.360621   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:38.386760   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:38.386782   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
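	[editor's note] Each gathering cycle, like the one that just completed, first enumerates per-component containers with a `docker ps -a --filter=name=k8s_<component>` name filter (the kubelet prefixes dockershim/cri-dockerd containers with `k8s_`), then pulls the last 400 lines from each with `docker logs --tail 400`. The sketch below reproduces that loop under stated assumptions; the component list is taken from the log, the helper itself is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches the kubelet's k8s_<component> prefix, as in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
```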
	I0819 11:24:40.907420   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:45.909853   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:45.910175   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:45.939135   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:45.939274   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:45.957785   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:45.957863   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:45.971237   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:45.971311   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:45.983393   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:45.983466   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:45.994107   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:45.994180   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:46.005791   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:46.005858   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:46.016218   14738 logs.go:276] 0 containers: []
	W0819 11:24:46.016229   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:46.016285   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:46.027537   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:46.027555   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:46.027561   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:46.051625   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:46.051636   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:46.065366   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:46.065376   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:46.076904   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:46.076914   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:46.088458   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:46.088469   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:46.112139   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:46.112148   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:46.149096   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:46.149111   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:46.164606   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:46.164617   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:46.187849   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:46.187868   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:46.209245   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:46.209263   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:46.232819   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:46.232831   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:46.237493   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:46.237500   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:46.252926   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:46.252941   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:46.276220   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:46.276232   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:46.294566   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:46.294577   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:46.335804   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:46.335814   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:48.850333   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:24:53.852963   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:24:53.853246   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:24:53.884926   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:24:53.885050   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:24:53.903871   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:24:53.903982   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:24:53.918258   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:24:53.918326   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:24:53.940838   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:24:53.940905   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:24:53.951048   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:24:53.951107   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:24:53.961881   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:24:53.961948   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:24:53.971834   14738 logs.go:276] 0 containers: []
	W0819 11:24:53.971846   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:24:53.971900   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:24:53.982007   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:24:53.982021   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:24:53.982026   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:24:53.995058   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:24:53.995069   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:24:54.007722   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:24:54.007733   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:24:54.026463   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:24:54.026477   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:24:54.031414   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:24:54.031422   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:24:54.050789   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:24:54.050800   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:24:54.066607   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:24:54.066617   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:24:54.107212   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:24:54.107227   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:24:54.125405   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:24:54.125419   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:24:54.164467   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:24:54.164480   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:24:54.191070   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:24:54.191086   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:24:54.218583   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:24:54.218602   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:24:54.234308   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:24:54.234318   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:24:54.247892   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:24:54.247903   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:24:54.262965   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:24:54.262976   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:24:54.275833   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:24:54.275844   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:24:56.803970   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:01.806295   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:01.806770   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:01.850344   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:25:01.850498   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:01.875629   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:25:01.875719   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:01.889757   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:25:01.889836   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:01.902675   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:25:01.902751   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:01.913946   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:25:01.914018   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:01.925717   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:25:01.925788   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:01.937148   14738 logs.go:276] 0 containers: []
	W0819 11:25:01.937161   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:01.937219   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:01.949333   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:25:01.949353   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:25:01.949358   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:25:01.965036   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:25:01.965049   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:25:01.978964   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:25:01.978977   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:25:02.001583   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:02.001592   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:02.006506   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:02.006513   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:02.043059   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:25:02.043071   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:25:02.060264   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:02.060277   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:02.084706   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:02.084722   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:02.125749   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:25:02.125765   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:25:02.138449   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:25:02.138461   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:25:02.157740   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:25:02.157750   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:25:02.183866   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:25:02.183878   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:02.198075   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:25:02.198088   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:25:02.218156   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:25:02.218169   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:25:02.238215   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:25:02.238226   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:25:02.251494   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:25:02.251506   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:25:04.768254   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:09.770634   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:09.771019   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:09.804571   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:25:09.804690   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:09.824310   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:25:09.824399   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:09.839078   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:25:09.839155   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:09.852271   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:25:09.852352   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:09.866667   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:25:09.866739   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:09.879870   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:25:09.879940   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:09.891785   14738 logs.go:276] 0 containers: []
	W0819 11:25:09.891796   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:09.891852   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:09.903208   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:25:09.903226   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:09.903233   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:09.942722   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:25:09.942734   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:25:09.960976   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:25:09.960988   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:25:09.973619   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:25:09.973631   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:25:10.000199   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:25:10.000213   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:25:10.018880   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:25:10.018890   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:25:10.031336   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:25:10.031351   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:25:10.053447   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:10.053458   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:10.078504   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:25:10.078515   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:10.092074   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:10.092085   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:10.096502   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:25:10.096512   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:25:10.112043   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:25:10.112055   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:25:10.124266   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:25:10.124277   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:25:10.136696   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:10.136712   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:10.177702   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:25:10.177716   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:25:10.192914   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:25:10.192926   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:25:12.709887   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:17.711491   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:17.711688   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:25:17.730191   14738 logs.go:276] 2 containers: [12957a075e08 e664d2838747]
	I0819 11:25:17.730270   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:25:17.744492   14738 logs.go:276] 2 containers: [f23af0cbf69f 70ca7c1620fa]
	I0819 11:25:17.744569   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:25:17.760389   14738 logs.go:276] 1 containers: [66a92e434d75]
	I0819 11:25:17.760463   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:25:17.771699   14738 logs.go:276] 2 containers: [e7e94964c84b c9b1bc8e1717]
	I0819 11:25:17.771771   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:25:17.782772   14738 logs.go:276] 1 containers: [7da80d796c5e]
	I0819 11:25:17.782844   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:25:17.794062   14738 logs.go:276] 2 containers: [8a35fd21c049 cba74a0177d5]
	I0819 11:25:17.794130   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:25:17.805844   14738 logs.go:276] 0 containers: []
	W0819 11:25:17.805857   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:25:17.805917   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:25:17.817841   14738 logs.go:276] 1 containers: [626478da71fb]
	I0819 11:25:17.817860   14738 logs.go:123] Gathering logs for etcd [70ca7c1620fa] ...
	I0819 11:25:17.817865   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70ca7c1620fa"
	I0819 11:25:17.833531   14738 logs.go:123] Gathering logs for kube-proxy [7da80d796c5e] ...
	I0819 11:25:17.833547   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7da80d796c5e"
	I0819 11:25:17.846274   14738 logs.go:123] Gathering logs for kube-controller-manager [cba74a0177d5] ...
	I0819 11:25:17.846283   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba74a0177d5"
	I0819 11:25:17.860160   14738 logs.go:123] Gathering logs for storage-provisioner [626478da71fb] ...
	I0819 11:25:17.860174   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 626478da71fb"
	I0819 11:25:17.882679   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:25:17.882691   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:25:17.922333   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:25:17.922355   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:25:17.927015   14738 logs.go:123] Gathering logs for kube-scheduler [e7e94964c84b] ...
	I0819 11:25:17.927023   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e94964c84b"
	I0819 11:25:17.939258   14738 logs.go:123] Gathering logs for kube-scheduler [c9b1bc8e1717] ...
	I0819 11:25:17.939268   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b1bc8e1717"
	I0819 11:25:17.961095   14738 logs.go:123] Gathering logs for kube-controller-manager [8a35fd21c049] ...
	I0819 11:25:17.961108   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a35fd21c049"
	I0819 11:25:17.979988   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:25:17.979998   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:25:17.992314   14738 logs.go:123] Gathering logs for kube-apiserver [12957a075e08] ...
	I0819 11:25:17.992325   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12957a075e08"
	I0819 11:25:18.006846   14738 logs.go:123] Gathering logs for kube-apiserver [e664d2838747] ...
	I0819 11:25:18.006855   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e664d2838747"
	I0819 11:25:18.041421   14738 logs.go:123] Gathering logs for etcd [f23af0cbf69f] ...
	I0819 11:25:18.041442   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23af0cbf69f"
	I0819 11:25:18.056563   14738 logs.go:123] Gathering logs for coredns [66a92e434d75] ...
	I0819 11:25:18.056579   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a92e434d75"
	I0819 11:25:18.068430   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:25:18.068442   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:25:18.092627   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:25:18.092641   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:25:20.632146   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:25.633829   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:25.633883   14738 kubeadm.go:597] duration metric: took 4m4.006733208s to restartPrimaryControlPlane
	W0819 11:25:25.633928   14738 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 11:25:25.633952   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 11:25:26.654970   14738 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.02100825s)
	I0819 11:25:26.655027   14738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:25:26.659936   14738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:25:26.663007   14738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:25:26.665642   14738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:25:26.665647   14738 kubeadm.go:157] found existing configuration files:
	
	I0819 11:25:26.665671   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/admin.conf
	I0819 11:25:26.668189   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:25:26.668209   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:25:26.670824   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/kubelet.conf
	I0819 11:25:26.673466   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:25:26.673489   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:25:26.676611   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/controller-manager.conf
	I0819 11:25:26.679457   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:25:26.679483   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:25:26.682092   14738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/scheduler.conf
	I0819 11:25:26.684927   14738 kubeadm.go:163] "https://control-plane.minikube.internal:52396" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52396 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:25:26.684949   14738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
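	[editor's note] The grep/rm sequence above is the stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so `kubeadm init` can regenerate it (here all four files are simply absent after the reset). A sketch of the same logic, with paths and endpoint copied from the log and the function name assumed:

```go
package main

import (
	"bytes"
	"os"
)

// cleanStaleKubeconfig removes path unless it already mentions endpoint.
// A missing file is fine, matching the "No such file or directory" cases above.
func cleanStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil
	}
	if err != nil {
		return err
	}
	if !bytes.Contains(data, []byte(endpoint)) {
		return os.Remove(path) // stale: points at a different endpoint
	}
	return nil
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		_ = cleanStaleKubeconfig("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:52396")
	}
}
```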
	I0819 11:25:26.688219   14738 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:25:26.704506   14738 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 11:25:26.704536   14738 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:25:26.753619   14738 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:25:26.753672   14738 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:25:26.753721   14738 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 11:25:26.802472   14738 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:25:26.806723   14738 out.go:235]   - Generating certificates and keys ...
	I0819 11:25:26.806757   14738 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:25:26.806796   14738 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:25:26.806848   14738 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 11:25:26.806885   14738 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 11:25:26.806924   14738 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 11:25:26.806952   14738 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 11:25:26.806989   14738 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 11:25:26.807030   14738 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 11:25:26.807072   14738 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 11:25:26.807117   14738 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 11:25:26.807149   14738 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 11:25:26.807189   14738 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:25:27.098083   14738 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:25:27.226234   14738 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:25:27.349101   14738 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:25:27.627697   14738 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:25:27.657721   14738 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:25:27.658092   14738 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:25:27.658533   14738 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:25:27.725528   14738 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:25:27.729786   14738 out.go:235]   - Booting up control plane ...
	I0819 11:25:27.729946   14738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:25:27.730040   14738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:25:27.730086   14738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:25:27.730166   14738 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:25:27.730316   14738 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 11:25:32.231596   14738 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501672 seconds
	I0819 11:25:32.231674   14738 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:25:32.235210   14738 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:25:32.749852   14738 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:25:32.750098   14738 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-163000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:25:33.255535   14738 kubeadm.go:310] [bootstrap-token] Using token: jtd2ut.wv7l8fjgzdqcwvda
	I0819 11:25:33.258767   14738 out.go:235]   - Configuring RBAC rules ...
	I0819 11:25:33.258820   14738 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:25:33.258865   14738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:25:33.263932   14738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:25:33.265012   14738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:25:33.266084   14738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:25:33.267032   14738 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:25:33.270336   14738 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:25:33.430195   14738 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:25:33.660055   14738 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:25:33.660774   14738 kubeadm.go:310] 
	I0819 11:25:33.660868   14738 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:25:33.660880   14738 kubeadm.go:310] 
	I0819 11:25:33.661033   14738 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:25:33.661041   14738 kubeadm.go:310] 
	I0819 11:25:33.661054   14738 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:25:33.661087   14738 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:25:33.661115   14738 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:25:33.661120   14738 kubeadm.go:310] 
	I0819 11:25:33.661147   14738 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:25:33.661152   14738 kubeadm.go:310] 
	I0819 11:25:33.661188   14738 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:25:33.661195   14738 kubeadm.go:310] 
	I0819 11:25:33.661217   14738 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:25:33.661261   14738 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:25:33.661296   14738 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:25:33.661299   14738 kubeadm.go:310] 
	I0819 11:25:33.661339   14738 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:25:33.661378   14738 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:25:33.661385   14738 kubeadm.go:310] 
	I0819 11:25:33.661425   14738 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jtd2ut.wv7l8fjgzdqcwvda \
	I0819 11:25:33.661531   14738 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3f715a0124d50cfae4e4dfc474638f45f1ddd0476a0318801e6849c5425b2951 \
	I0819 11:25:33.661549   14738 kubeadm.go:310] 	--control-plane 
	I0819 11:25:33.661556   14738 kubeadm.go:310] 
	I0819 11:25:33.661623   14738 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:25:33.661631   14738 kubeadm.go:310] 
	I0819 11:25:33.661674   14738 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jtd2ut.wv7l8fjgzdqcwvda \
	I0819 11:25:33.661730   14738 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3f715a0124d50cfae4e4dfc474638f45f1ddd0476a0318801e6849c5425b2951 
	I0819 11:25:33.661791   14738 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
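	[editor's note] The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate; a joining node recomputes it to authenticate the control plane. A self-contained sketch of that computation, assuming the conventional kubeadm CA path:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the SPKI (public key) DER, not the whole certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```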
	I0819 11:25:33.661798   14738 cni.go:84] Creating CNI manager for ""
	I0819 11:25:33.661805   14738 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:25:33.665654   14738 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 11:25:33.673645   14738 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 11:25:33.677046   14738 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
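	[editor's note] The 496-byte conflist scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI config the previous lines announce. Its exact contents are not shown in this log; the snippet below writes an illustrative minimal bridge + host-local IPAM conflist of the kind the bridge plugin accepts (field values, including the pod subnet, are assumptions):

```go
package main

import "os"

// Illustrative bridge CNI conflist; not necessarily the bytes minikube wrote.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```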
	I0819 11:25:33.682528   14738 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:25:33.682603   14738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:25:33.682636   14738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-163000 minikube.k8s.io/updated_at=2024_08_19T11_25_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=stopped-upgrade-163000 minikube.k8s.io/primary=true
	I0819 11:25:33.719300   14738 ops.go:34] apiserver oom_adj: -16
	I0819 11:25:33.719424   14738 kubeadm.go:1113] duration metric: took 36.892959ms to wait for elevateKubeSystemPrivileges
	I0819 11:25:33.747579   14738 kubeadm.go:394] duration metric: took 4m12.137773333s to StartCluster
	I0819 11:25:33.747600   14738 settings.go:142] acquiring lock: {Name:mk15c923e9a2cce6164c6c5cc70f47fd16c4c208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:25:33.747691   14738 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:25:33.748117   14738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/kubeconfig: {Name:mkf06e67426049c2259f6e46b5143872117d8aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:25:33.748429   14738 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:25:33.748462   14738 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:25:33.748444   14738 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 11:25:33.748665   14738 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-163000"
	I0819 11:25:33.748673   14738 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-163000"
	I0819 11:25:33.748696   14738 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-163000"
	I0819 11:25:33.748700   14738 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-163000"
	W0819 11:25:33.748708   14738 addons.go:243] addon storage-provisioner should already be in state true
	I0819 11:25:33.748731   14738 host.go:66] Checking if "stopped-upgrade-163000" exists ...
	I0819 11:25:33.750932   14738 kapi.go:59] client config for stopped-upgrade-163000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/stopped-upgrade-163000/client.key", CAFile:"/Users/jenkins/minikube-integration/19468-11838/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106043d10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:25:33.751122   14738 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-163000"
	W0819 11:25:33.751132   14738 addons.go:243] addon default-storageclass should already be in state true
	I0819 11:25:33.751147   14738 host.go:66] Checking if "stopped-upgrade-163000" exists ...
	I0819 11:25:33.753592   14738 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:25:33.753614   14738 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:25:33.753629   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0819 11:25:33.756669   14738 out.go:177] * Verifying Kubernetes components...
	I0819 11:25:33.760643   14738 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:25:33.764725   14738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:25:33.767854   14738 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:25:33.767900   14738 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:25:33.767921   14738 sshutil.go:53] new ssh client: &{IP:localhost Port:52361 SSHKeyPath:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/stopped-upgrade-163000/id_rsa Username:docker}
	I0819 11:25:33.841595   14738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:25:33.848717   14738 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:25:33.848780   14738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:25:33.852205   14738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:25:33.853438   14738 api_server.go:72] duration metric: took 104.970958ms to wait for apiserver process to appear ...
	I0819 11:25:33.853447   14738 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:25:33.853455   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:33.896346   14738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:25:34.230990   14738 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 11:25:34.231002   14738 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 11:25:38.854000   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:38.854034   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:43.854566   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:43.854590   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:48.855473   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:48.855501   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:53.856110   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:53.856139   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:25:58.856529   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:25:58.856567   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:03.857341   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:03.857363   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 11:26:04.233095   14738 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
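	[editor's note] The default-storageclass callback above fails because it has to list StorageClasses through the (unreachable) apiserver at 10.0.2.15:8443. A minimal client-go sketch of the same call, assuming a kubeconfig at the path the log uses; with the apiserver down it returns the same i/o timeout:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to GET /apis/storage.k8s.io/v1/storageclasses from the error above.
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err) // e.g. "dial tcp 10.0.2.15:8443: i/o timeout"
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
```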
	I0819 11:26:04.241288   14738 out.go:177] * Enabled addons: storage-provisioner
	I0819 11:26:04.247195   14738 addons.go:510] duration metric: took 30.498924s for enable addons: enabled=[storage-provisioner]
	I0819 11:26:08.858047   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:08.858095   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:13.859011   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:13.859037   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:18.860211   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:18.860242   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:23.861726   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:23.861747   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:28.863583   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:28.863623   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:33.865229   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:33.865381   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:26:33.876651   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:26:33.876723   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:26:33.887664   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:26:33.887737   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:26:33.898250   14738 logs.go:276] 2 containers: [cfef7301ce2a ef31fb8f1aa5]
	I0819 11:26:33.898326   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:26:33.908733   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:26:33.908801   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:26:33.919415   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:26:33.919480   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:26:33.930230   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:26:33.930299   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:26:33.940304   14738 logs.go:276] 0 containers: []
	W0819 11:26:33.940315   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:26:33.940369   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:26:33.950323   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
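
Having failed the health check, each diagnosis pass first resolves one container ID per control-plane component through the docker ps runs above. A rough standalone equivalent in Go (hypothetical; minikube actually issues these commands through ssh_runner inside the VM):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists IDs of containers whose names match the kubeadm
	// naming prefix for one component (e.g. "k8s_kube-apiserver"), as above.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
		}
	}
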
	I0819 11:26:33.950339   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:26:33.950344   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:26:33.962015   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:26:33.962028   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:26:33.973701   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:26:33.973711   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:26:33.997354   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:26:33.997364   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:26:34.009061   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:26:34.009074   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:26:34.023181   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:26:34.023193   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:26:34.039817   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:26:34.039827   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:26:34.077763   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:26:34.077776   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:26:34.092012   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:26:34.092024   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:26:34.103440   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:26:34.103454   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:26:34.118761   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:26:34.118771   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:26:34.136820   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:26:34.136831   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:26:34.170303   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:26:34.170310   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
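
Once the per-component IDs are known, the "Gathering logs for …" pairs fan out over a fixed command set: docker logs --tail 400 per container, plus kubelet and Docker journals, dmesg, container status, and kubectl describe nodes. A condensed sketch of that fan-out (the command map is assembled from the log above; the real calls go over the VM's SSH session, not locally):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs each diagnostic command through bash -c, the same wrapper
	// ssh_runner uses in the log, and prints whatever comes back. Go map
	// iteration order is unspecified, which loosely echoes how the passes
	// above visit the same sources in a different order each time.
	func gather(commands map[string]string) {
		for name, cmdline := range commands {
			fmt.Println("Gathering logs for", name, "...")
			out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
			if err != nil {
				fmt.Printf("  %s failed: %v\n", name, err)
			}
			fmt.Print(string(out))
		}
	}

	func main() {
		gather(map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
			"kube-apiserver":   "docker logs --tail 400 16f2b86c071c", // container ID from the log above
		})
	}
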
	I0819 11:26:36.677094   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:41.679545   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:41.679919   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:26:41.714067   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:26:41.714189   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:26:41.732540   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:26:41.732627   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:26:41.746386   14738 logs.go:276] 2 containers: [cfef7301ce2a ef31fb8f1aa5]
	I0819 11:26:41.746451   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:26:41.758473   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:26:41.758543   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:26:41.769637   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:26:41.769710   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:26:41.780873   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:26:41.780941   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:26:41.791109   14738 logs.go:276] 0 containers: []
	W0819 11:26:41.791121   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:26:41.791177   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:26:41.801619   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:26:41.801637   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:26:41.801646   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:26:41.837872   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:26:41.837883   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:26:41.850930   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:26:41.850942   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:26:41.862962   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:26:41.862973   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:26:41.877922   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:26:41.877936   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:26:41.889719   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:26:41.889730   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:26:41.901752   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:26:41.901766   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:26:41.935676   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:26:41.935685   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:26:41.939745   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:26:41.939755   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:26:41.953656   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:26:41.953669   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:26:41.967777   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:26:41.967789   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:26:41.981088   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:26:41.981101   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:26:41.999303   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:26:41.999315   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:26:44.524600   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:49.527406   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:49.527796   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:26:49.562588   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:26:49.562698   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:26:49.582593   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:26:49.582706   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:26:49.597632   14738 logs.go:276] 2 containers: [cfef7301ce2a ef31fb8f1aa5]
	I0819 11:26:49.597695   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:26:49.609409   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:26:49.609470   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:26:49.619476   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:26:49.619543   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:26:49.631011   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:26:49.631079   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:26:49.641586   14738 logs.go:276] 0 containers: []
	W0819 11:26:49.641597   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:26:49.641654   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:26:49.652194   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:26:49.652209   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:26:49.652215   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:26:49.684710   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:26:49.684717   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:26:49.702289   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:26:49.702301   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:26:49.716145   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:26:49.716155   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:26:49.730075   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:26:49.730088   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:26:49.740911   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:26:49.740920   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:26:49.756025   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:26:49.756036   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:26:49.769051   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:26:49.769064   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:26:49.789992   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:26:49.790005   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:26:49.794319   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:26:49.794326   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:26:49.832703   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:26:49.832718   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:26:49.843731   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:26:49.843743   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:26:49.860004   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:26:49.860019   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:26:52.387240   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:26:57.389408   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:26:57.389813   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:26:57.425400   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:26:57.425502   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:26:57.444839   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:26:57.444907   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:26:57.461176   14738 logs.go:276] 2 containers: [cfef7301ce2a ef31fb8f1aa5]
	I0819 11:26:57.461233   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:26:57.473959   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:26:57.474019   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:26:57.485681   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:26:57.485747   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:26:57.498577   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:26:57.498632   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:26:57.511771   14738 logs.go:276] 0 containers: []
	W0819 11:26:57.511782   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:26:57.511823   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:26:57.524046   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:26:57.524058   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:26:57.524063   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:26:57.541696   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:26:57.541707   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:26:57.553567   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:26:57.553575   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:26:57.587638   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:26:57.587648   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:26:57.602876   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:26:57.602888   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:26:57.616928   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:26:57.616937   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:26:57.628446   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:26:57.628455   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:26:57.640168   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:26:57.640182   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:26:57.653759   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:26:57.653772   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:26:57.658177   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:26:57.658183   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:26:57.693166   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:26:57.693182   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:26:57.712941   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:26:57.712951   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:26:57.727498   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:26:57.727508   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:27:00.253071   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:27:05.255619   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:27:05.256012   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:27:05.297481   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:27:05.297595   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:27:05.316688   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:27:05.316773   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:27:05.331119   14738 logs.go:276] 2 containers: [cfef7301ce2a ef31fb8f1aa5]
	I0819 11:27:05.331194   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:27:05.343253   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:27:05.343321   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:27:05.354334   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:27:05.354397   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:27:05.365029   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:27:05.365093   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:27:05.375402   14738 logs.go:276] 0 containers: []
	W0819 11:27:05.375418   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:27:05.375477   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:27:05.391364   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:27:05.391386   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:27:05.391392   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:27:05.405099   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:27:05.405114   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:27:05.416303   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:27:05.416313   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:27:05.427785   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:27:05.427797   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:27:05.446282   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:27:05.446297   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:27:05.458059   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:27:05.458074   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:27:05.493315   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:27:05.493331   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:27:05.529408   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:27:05.529418   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:27:05.547140   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:27:05.547153   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:27:05.559941   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:27:05.559952   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:27:05.585821   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:27:05.585831   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:27:05.589792   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:27:05.589802   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:27:05.602163   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:27:05.602177   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:27:08.118021   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:27:13.121233   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:27:13.121713   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:27:13.163928   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:27:13.164061   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:27:13.184970   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:27:13.185083   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:27:13.199436   14738 logs.go:276] 2 containers: [cfef7301ce2a ef31fb8f1aa5]
	I0819 11:27:13.199506   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:27:13.211912   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:27:13.211978   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:27:13.222419   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:27:13.222489   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:27:13.232843   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:27:13.232909   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:27:13.243417   14738 logs.go:276] 0 containers: []
	W0819 11:27:13.243427   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:27:13.243472   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:27:13.254200   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:27:13.254214   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:27:13.254219   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:27:13.258506   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:27:13.258516   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:27:13.295111   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:27:13.295124   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:27:13.309907   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:27:13.309918   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:27:13.324134   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:27:13.324144   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:27:13.336113   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:27:13.336122   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:27:13.354088   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:27:13.354098   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:27:13.365631   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:27:13.365642   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:27:13.399671   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:27:13.399680   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:27:13.415010   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:27:13.415022   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:27:13.426762   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:27:13.426773   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:27:13.450623   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:27:13.450631   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:27:13.462223   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:27:13.462232   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:27:15.976835   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:27:20.979748   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:27:20.980252   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:27:21.020912   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:27:21.021039   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:27:21.048370   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:27:21.048454   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:27:21.063292   14738 logs.go:276] 2 containers: [cfef7301ce2a ef31fb8f1aa5]
	I0819 11:27:21.063366   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:27:21.075212   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:27:21.075273   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:27:21.086178   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:27:21.086244   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:27:21.097401   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:27:21.097478   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:27:21.108329   14738 logs.go:276] 0 containers: []
	W0819 11:27:21.108337   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:27:21.108388   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:27:21.119799   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:27:21.119817   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:27:21.119822   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:27:21.152633   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:27:21.152640   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:27:21.156785   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:27:21.156791   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:27:21.192025   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:27:21.192037   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:27:21.206855   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:27:21.206868   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:27:21.221803   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:27:21.221815   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:27:21.234290   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:27:21.234303   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:27:21.246059   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:27:21.246071   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:27:21.269712   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:27:21.269721   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:27:21.288726   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:27:21.288737   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:27:21.307890   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:27:21.307903   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:27:21.320111   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:27:21.320123   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:27:21.338111   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:27:21.338120   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:27:23.864543   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:27:28.867258   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:27:28.867530   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:27:28.902106   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:27:28.902236   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:27:28.919805   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:27:28.919882   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:27:28.933209   14738 logs.go:276] 2 containers: [cfef7301ce2a ef31fb8f1aa5]
	I0819 11:27:28.933281   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:27:28.944774   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:27:28.944841   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:27:28.955883   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:27:28.955949   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:27:28.971677   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:27:28.971742   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:27:28.982107   14738 logs.go:276] 0 containers: []
	W0819 11:27:28.982119   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:27:28.982171   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:27:28.992547   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:27:28.992562   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:27:28.992567   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:27:29.007112   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:27:29.007125   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:27:29.021364   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:27:29.021376   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:27:29.040164   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:27:29.040175   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:27:29.052391   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:27:29.052401   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:27:29.064515   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:27:29.064528   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:27:29.081894   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:27:29.081906   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:27:29.114746   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:27:29.114753   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:27:29.118871   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:27:29.118880   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:27:29.130430   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:27:29.130441   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:27:29.142480   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:27:29.142493   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:27:29.166490   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:27:29.166497   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:27:29.203731   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:27:29.203744   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:27:31.727333   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:27:36.730112   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:27:36.730549   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:27:36.769915   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:27:36.770051   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:27:36.791681   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:27:36.791803   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:27:36.807746   14738 logs.go:276] 2 containers: [cfef7301ce2a ef31fb8f1aa5]
	I0819 11:27:36.807819   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:27:36.820112   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:27:36.820184   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:27:36.831293   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:27:36.831363   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:27:36.842120   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:27:36.842194   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:27:36.852913   14738 logs.go:276] 0 containers: []
	W0819 11:27:36.852924   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:27:36.852987   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:27:36.863858   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:27:36.863874   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:27:36.863879   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:27:36.898215   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:27:36.898223   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:27:36.913046   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:27:36.913057   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:27:36.924737   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:27:36.924747   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:27:36.939595   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:27:36.939604   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:27:36.951424   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:27:36.951433   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:27:36.969253   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:27:36.969262   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:27:36.981555   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:27:36.981566   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:27:36.993815   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:27:36.993824   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:27:36.998213   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:27:36.998220   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:27:37.034172   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:27:37.034183   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:27:37.051587   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:27:37.051597   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:27:37.068546   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:27:37.068559   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:27:39.594187   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:27:44.596760   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:27:44.597124   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:27:44.632456   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:27:44.632571   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:27:44.653664   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:27:44.653756   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:27:44.679888   14738 logs.go:276] 2 containers: [cfef7301ce2a ef31fb8f1aa5]
	I0819 11:27:44.679979   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:27:44.694404   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:27:44.694473   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:27:44.712482   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:27:44.712542   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:27:44.733022   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:27:44.733096   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:27:44.754191   14738 logs.go:276] 0 containers: []
	W0819 11:27:44.754202   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:27:44.754257   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:27:44.791387   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:27:44.791402   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:27:44.791407   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:27:44.809007   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:27:44.809020   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:27:44.821055   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:27:44.821066   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:27:44.840255   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:27:44.840266   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:27:44.851791   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:27:44.851802   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:27:44.877132   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:27:44.877141   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:27:44.888684   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:27:44.888696   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:27:44.923197   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:27:44.923205   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:27:44.927347   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:27:44.927356   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:27:44.939544   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:27:44.939558   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:27:44.961294   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:27:44.961306   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:27:44.972584   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:27:44.972596   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:27:45.008644   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:27:45.008653   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:27:47.524687   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:27:52.527329   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:27:52.527808   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:27:52.566766   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:27:52.566897   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:27:52.591044   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:27:52.591150   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:27:52.605830   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:27:52.605906   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:27:52.617850   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:27:52.617912   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:27:52.629150   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:27:52.629205   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:27:52.640160   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:27:52.640221   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:27:52.650017   14738 logs.go:276] 0 containers: []
	W0819 11:27:52.650030   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:27:52.650087   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:27:52.660632   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:27:52.660649   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:27:52.660654   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:27:52.674394   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:27:52.674407   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:27:52.685977   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:27:52.685990   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:27:52.697395   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:27:52.697405   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:27:52.712904   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:27:52.712919   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:27:52.728522   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:27:52.728536   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:27:52.741077   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:27:52.741089   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:27:52.773947   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:27:52.773955   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:27:52.785038   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:27:52.785050   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:27:52.796192   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:27:52.796203   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:27:52.813702   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:27:52.813716   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:27:52.818356   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:27:52.818366   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:27:52.853319   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:27:52.853330   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:27:52.877630   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:27:52.877638   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:27:52.891978   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:27:52.891990   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:27:55.405562   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:28:00.408352   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:28:00.408429   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:28:00.420031   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:28:00.420081   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:28:00.431683   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:28:00.431743   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:28:00.442832   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:28:00.442893   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:28:00.453740   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:28:00.453803   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:28:00.467678   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:28:00.467743   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:28:00.479393   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:28:00.479441   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:28:00.489873   14738 logs.go:276] 0 containers: []
	W0819 11:28:00.489885   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:28:00.489940   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:28:00.501105   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:28:00.501123   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:28:00.501128   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:28:00.522042   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:28:00.522053   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:28:00.538217   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:28:00.538224   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:28:00.558927   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:28:00.558941   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:28:00.595837   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:28:00.595851   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:28:00.634490   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:28:00.634502   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:28:00.650672   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:28:00.650688   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:28:00.663759   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:28:00.663769   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:28:00.675654   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:28:00.675667   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:28:00.693467   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:28:00.693479   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:28:00.718888   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:28:00.718899   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:28:00.724376   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:28:00.724388   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:28:00.737138   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:28:00.737149   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:28:00.753197   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:28:00.753208   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:28:00.772649   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:28:00.772661   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:28:03.288127   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:28:08.290513   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:28:08.290680   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:28:08.317093   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:28:08.317163   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:28:08.334854   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:28:08.334951   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:28:08.349177   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:28:08.349254   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:28:08.361512   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:28:08.361585   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:28:08.374015   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:28:08.374089   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:28:08.386550   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:28:08.386624   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:28:08.398200   14738 logs.go:276] 0 containers: []
	W0819 11:28:08.398212   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:28:08.398276   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:28:08.410498   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:28:08.410514   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:28:08.410520   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:28:08.423369   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:28:08.423381   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:28:08.437059   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:28:08.437072   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:28:08.450631   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:28:08.450645   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:28:08.464067   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:28:08.464081   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:28:08.483885   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:28:08.483894   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:28:08.495489   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:28:08.495500   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:28:08.530353   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:28:08.530365   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:28:08.542602   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:28:08.542612   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:28:08.557343   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:28:08.557354   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:28:08.583153   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:28:08.583165   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:28:08.596939   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:28:08.596950   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:28:08.601674   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:28:08.601680   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:28:08.617075   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:28:08.617086   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:28:08.628818   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:28:08.628830   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:28:11.162132   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:28:16.163834   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:28:16.164172   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:28:16.194484   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:28:16.194610   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:28:16.212603   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:28:16.212687   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:28:16.226753   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:28:16.226837   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:28:16.239046   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:28:16.239107   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:28:16.249094   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:28:16.249156   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:28:16.259988   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:28:16.260045   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:28:16.270124   14738 logs.go:276] 0 containers: []
	W0819 11:28:16.270136   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:28:16.270183   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:28:16.281004   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:28:16.281027   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:28:16.281033   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:28:16.315493   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:28:16.315502   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:28:16.327632   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:28:16.327643   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:28:16.351152   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:28:16.351159   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:28:16.362159   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:28:16.362169   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:28:16.373778   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:28:16.373791   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:28:16.390235   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:28:16.390248   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:28:16.412214   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:28:16.412226   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:28:16.424092   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:28:16.424104   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:28:16.437722   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:28:16.437731   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:28:16.449813   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:28:16.449824   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:28:16.464073   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:28:16.464084   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:28:16.475916   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:28:16.475928   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:28:16.487272   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:28:16.487282   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:28:16.521805   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:28:16.521812   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:28:19.028300   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:28:24.029956   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:28:24.030028   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:28:24.042853   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:28:24.042916   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:28:24.054561   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:28:24.054611   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:28:24.065596   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:28:24.065666   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:28:24.077180   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:28:24.077229   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:28:24.088834   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:28:24.088900   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:28:24.101433   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:28:24.101480   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:28:24.111790   14738 logs.go:276] 0 containers: []
	W0819 11:28:24.111799   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:28:24.111854   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:28:24.122977   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:28:24.122990   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:28:24.122995   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:28:24.159368   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:28:24.159382   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:28:24.174683   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:28:24.174698   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:28:24.188132   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:28:24.188141   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:28:24.202144   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:28:24.202158   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:28:24.219204   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:28:24.219222   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:28:24.235360   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:28:24.235370   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:28:24.247049   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:28:24.247060   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:28:24.265740   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:28:24.265753   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:28:24.278100   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:28:24.278110   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:28:24.290191   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:28:24.290203   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:28:24.294907   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:28:24.294916   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:28:24.330816   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:28:24.330828   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:28:24.345718   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:28:24.345730   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:28:24.360969   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:28:24.360978   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:28:26.888393   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:28:31.889295   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:28:31.889476   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:28:31.919857   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:28:31.919930   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:28:31.934766   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:28:31.934833   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:28:31.945459   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:28:31.945526   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:28:31.955791   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:28:31.955851   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:28:31.966370   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:28:31.966434   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:28:31.980286   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:28:31.980356   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:28:31.992005   14738 logs.go:276] 0 containers: []
	W0819 11:28:31.992017   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:28:31.992068   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:28:32.003065   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:28:32.003083   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:28:32.003089   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:28:32.035400   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:28:32.035410   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:28:32.068955   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:28:32.068964   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:28:32.083706   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:28:32.083719   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:28:32.087856   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:28:32.087864   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:28:32.101531   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:28:32.101543   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:28:32.119050   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:28:32.119063   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:28:32.130409   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:28:32.130418   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:28:32.147355   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:28:32.147365   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:28:32.159033   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:28:32.159046   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:28:32.173503   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:28:32.173512   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:28:32.188777   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:28:32.188790   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:28:32.201874   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:28:32.201887   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:28:32.213369   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:28:32.213379   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:28:32.237727   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:28:32.237738   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:28:34.751618   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:28:39.753959   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:28:39.754389   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:28:39.794139   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:28:39.794262   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:28:39.816448   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:28:39.816551   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:28:39.833173   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:28:39.833252   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:28:39.845238   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:28:39.845312   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:28:39.856378   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:28:39.856438   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:28:39.867041   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:28:39.867103   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:28:39.877512   14738 logs.go:276] 0 containers: []
	W0819 11:28:39.877523   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:28:39.877573   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:28:39.887902   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:28:39.887918   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:28:39.887925   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:28:39.922352   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:28:39.922363   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:28:39.937714   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:28:39.937724   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:28:39.949377   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:28:39.949388   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:28:39.953826   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:28:39.953833   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:28:39.971722   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:28:39.971733   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:28:39.990397   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:28:39.990409   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:28:40.002369   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:28:40.002380   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:28:40.017251   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:28:40.017260   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:28:40.029139   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:28:40.029149   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:28:40.044923   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:28:40.044932   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:28:40.056784   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:28:40.056795   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:28:40.068950   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:28:40.068960   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:28:40.080355   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:28:40.080365   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:28:40.104996   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:28:40.105006   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:28:42.639777   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:28:47.642455   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:28:47.642515   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:28:47.655937   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:28:47.656019   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:28:47.667618   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:28:47.667689   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:28:47.678563   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:28:47.678643   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:28:47.690088   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:28:47.690160   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:28:47.701063   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:28:47.701133   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:28:47.713452   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:28:47.713526   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:28:47.724164   14738 logs.go:276] 0 containers: []
	W0819 11:28:47.724175   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:28:47.724232   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:28:47.736574   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:28:47.736591   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:28:47.736599   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:28:47.755498   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:28:47.755512   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:28:47.790757   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:28:47.790770   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:28:47.834458   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:28:47.834471   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:28:47.850064   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:28:47.850074   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:28:47.861973   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:28:47.861984   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:28:47.877855   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:28:47.877867   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:28:47.895119   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:28:47.895130   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:28:47.913123   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:28:47.913140   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:28:47.938180   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:28:47.938196   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:28:47.954220   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:28:47.954232   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:28:47.972013   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:28:47.972021   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:28:47.983456   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:28:47.983467   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:28:47.995764   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:28:47.995775   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:28:48.001003   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:28:48.001014   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:28:50.515461   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:28:55.518114   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:28:55.518506   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:28:55.561900   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:28:55.562020   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:28:55.581478   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:28:55.581563   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:28:55.596756   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:28:55.596827   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:28:55.608943   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:28:55.609011   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:28:55.620260   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:28:55.620323   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:28:55.630903   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:28:55.630966   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:28:55.641867   14738 logs.go:276] 0 containers: []
	W0819 11:28:55.641879   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:28:55.641935   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:28:55.653413   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:28:55.653431   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:28:55.653437   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:28:55.657754   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:28:55.657761   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:28:55.669302   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:28:55.669315   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:28:55.681269   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:28:55.681281   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:28:55.696642   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:28:55.696656   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:28:55.708465   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:28:55.708475   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:28:55.742926   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:28:55.742933   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:28:55.777344   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:28:55.777359   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:28:55.789213   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:28:55.789225   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:28:55.801059   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:28:55.801070   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:28:55.814904   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:28:55.814916   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:28:55.826726   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:28:55.826738   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:28:55.844970   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:28:55.844980   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:28:55.856653   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:28:55.856666   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:28:55.871078   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:28:55.871089   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:28:58.395157   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:29:03.395715   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:29:03.396121   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:29:03.427519   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:29:03.427645   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:29:03.448576   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:29:03.448683   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:29:03.463725   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:29:03.463799   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:29:03.475522   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:29:03.475588   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:29:03.486285   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:29:03.486351   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:29:03.497431   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:29:03.497487   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:29:03.512546   14738 logs.go:276] 0 containers: []
	W0819 11:29:03.512559   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:29:03.512612   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:29:03.526219   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:29:03.526237   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:29:03.526242   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:29:03.538273   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:29:03.538285   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:29:03.558648   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:29:03.558662   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:29:03.569982   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:29:03.569996   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:29:03.581413   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:29:03.581426   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:29:03.614752   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:29:03.614762   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:29:03.631340   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:29:03.631354   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:29:03.654896   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:29:03.654903   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:29:03.658682   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:29:03.658690   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:29:03.672146   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:29:03.672156   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:29:03.684208   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:29:03.684218   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:29:03.702376   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:29:03.702387   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:29:03.734636   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:29:03.734643   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:29:03.749377   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:29:03.749389   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:29:03.760765   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:29:03.760777   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:29:06.274809   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:29:11.277375   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:29:11.277451   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:29:11.291412   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:29:11.291473   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:29:11.303139   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:29:11.303204   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:29:11.315548   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:29:11.315605   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:29:11.326943   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:29:11.326993   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:29:11.338232   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:29:11.338292   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:29:11.349650   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:29:11.349729   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:29:11.362215   14738 logs.go:276] 0 containers: []
	W0819 11:29:11.362227   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:29:11.362302   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:29:11.373931   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:29:11.373951   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:29:11.373957   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:29:11.386575   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:29:11.386586   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:29:11.413455   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:29:11.413466   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:29:11.432637   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:29:11.432650   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:29:11.467228   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:29:11.467249   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:29:11.479697   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:29:11.479710   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:29:11.492300   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:29:11.492312   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:29:11.511283   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:29:11.511294   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:29:11.516424   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:29:11.516436   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:29:11.534047   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:29:11.534058   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:29:11.570214   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:29:11.570228   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:29:11.583328   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:29:11.583339   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:29:11.596601   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:29:11.596617   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:29:11.609120   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:29:11.609133   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:29:11.623652   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:29:11.623663   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:29:14.141884   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:29:19.144331   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:29:19.144764   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:29:19.187017   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:29:19.187155   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:29:19.209146   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:29:19.209252   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:29:19.234251   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:29:19.234328   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:29:19.245853   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:29:19.245916   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:29:19.256295   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:29:19.256362   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:29:19.270054   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:29:19.270115   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:29:19.280639   14738 logs.go:276] 0 containers: []
	W0819 11:29:19.280648   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:29:19.280701   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:29:19.291123   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:29:19.291139   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:29:19.291144   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:29:19.308505   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:29:19.308516   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:29:19.331783   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:29:19.331792   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:29:19.344732   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:29:19.344741   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:29:19.356255   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:29:19.356264   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:29:19.390354   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:29:19.390363   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:29:19.404401   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:29:19.404411   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:29:19.420625   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:29:19.420638   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:29:19.432578   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:29:19.432588   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:29:19.447273   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:29:19.447284   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:29:19.459158   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:29:19.459173   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:29:19.464045   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:29:19.464051   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:29:19.497908   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:29:19.497919   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:29:19.510064   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:29:19.510075   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:29:19.527522   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:29:19.527532   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:29:22.040999   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:29:27.043713   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:29:27.044062   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:29:27.073629   14738 logs.go:276] 1 containers: [16f2b86c071c]
	I0819 11:29:27.073745   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:29:27.096714   14738 logs.go:276] 1 containers: [157fedf83b9a]
	I0819 11:29:27.096784   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:29:27.110565   14738 logs.go:276] 4 containers: [91439e4285aa 1aa140af6893 cfef7301ce2a ef31fb8f1aa5]
	I0819 11:29:27.110642   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:29:27.121821   14738 logs.go:276] 1 containers: [a33e6296238d]
	I0819 11:29:27.121890   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:29:27.132131   14738 logs.go:276] 1 containers: [3977524905de]
	I0819 11:29:27.132189   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:29:27.141988   14738 logs.go:276] 1 containers: [858b4500d180]
	I0819 11:29:27.142045   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:29:27.152197   14738 logs.go:276] 0 containers: []
	W0819 11:29:27.152205   14738 logs.go:278] No container was found matching "kindnet"
	I0819 11:29:27.152253   14738 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:29:27.162707   14738 logs.go:276] 1 containers: [9f7b4fa8a75c]
	I0819 11:29:27.162723   14738 logs.go:123] Gathering logs for coredns [91439e4285aa] ...
	I0819 11:29:27.162728   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91439e4285aa"
	I0819 11:29:27.174248   14738 logs.go:123] Gathering logs for coredns [cfef7301ce2a] ...
	I0819 11:29:27.174258   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfef7301ce2a"
	I0819 11:29:27.185977   14738 logs.go:123] Gathering logs for kube-proxy [3977524905de] ...
	I0819 11:29:27.185987   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3977524905de"
	I0819 11:29:27.197376   14738 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:29:27.197386   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:29:27.231052   14738 logs.go:123] Gathering logs for etcd [157fedf83b9a] ...
	I0819 11:29:27.231067   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157fedf83b9a"
	I0819 11:29:27.245296   14738 logs.go:123] Gathering logs for storage-provisioner [9f7b4fa8a75c] ...
	I0819 11:29:27.245306   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7b4fa8a75c"
	I0819 11:29:27.256840   14738 logs.go:123] Gathering logs for dmesg ...
	I0819 11:29:27.256852   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:29:27.261161   14738 logs.go:123] Gathering logs for kube-apiserver [16f2b86c071c] ...
	I0819 11:29:27.261169   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16f2b86c071c"
	I0819 11:29:27.276578   14738 logs.go:123] Gathering logs for coredns [1aa140af6893] ...
	I0819 11:29:27.276588   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa140af6893"
	I0819 11:29:27.288776   14738 logs.go:123] Gathering logs for coredns [ef31fb8f1aa5] ...
	I0819 11:29:27.288789   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef31fb8f1aa5"
	I0819 11:29:27.300137   14738 logs.go:123] Gathering logs for kube-scheduler [a33e6296238d] ...
	I0819 11:29:27.300149   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a33e6296238d"
	I0819 11:29:27.320895   14738 logs.go:123] Gathering logs for Docker ...
	I0819 11:29:27.320907   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:29:27.343672   14738 logs.go:123] Gathering logs for kubelet ...
	I0819 11:29:27.343677   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:29:27.375689   14738 logs.go:123] Gathering logs for container status ...
	I0819 11:29:27.375698   14738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:29:27.387749   14738 logs.go:123] Gathering logs for kube-controller-manager [858b4500d180] ...
	I0819 11:29:27.387760   14738 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 858b4500d180"
	I0819 11:29:29.911860   14738 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:29:34.912644   14738 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:29:34.922620   14738 out.go:201] 
	W0819 11:29:34.928599   14738 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 11:29:34.928611   14738 out.go:270] * 
	W0819 11:29:34.929080   14738 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:29:34.941603   14738 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-163000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (576.01s)
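
Note: the failure above is a GUEST_START timeout: minikube polled https://10.0.2.15:8443/healthz for the full 6m0s without the apiserver ever reporting healthy. A minimal way to re-probe the same endpoint by hand is sketched below; it assumes the stopped-upgrade-163000 guest is still running and that curl is available inside it (neither is shown by this report):

	# probe the apiserver health endpoint from inside the guest;
	# -k skips TLS verification, --max-time bounds the hang seen above
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-163000 -- curl -k --max-time 5 https://10.0.2.15:8443/healthz

A healthy apiserver answers "ok"; a hang or refused connection matches the "context deadline exceeded" logged above.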

TestPause/serial/Start (10.12s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-907000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-907000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.075980666s)

-- stdout --
	* [pause-907000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-907000" primary control-plane node in "pause-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-907000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-907000 -n pause-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-907000 -n pause-907000: exit status 7 (48.164417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-907000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.12s)
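
Note: this failure, and every qemu2 start below it, dies at the same point: the driver's socket_vmnet client gets "Connection refused" on /var/run/socket_vmnet, so no VM is ever created. A quick host-side check is sketched below; the socket path is taken from the logs, while the launchd query is an assumption, since the daemon's label depends on how socket_vmnet was installed:

	ls -l /var/run/socket_vmnet          # does the unix socket exist at all?
	sudo launchctl list | grep -i vmnet  # is any socket_vmnet daemon registered?

If the socket is missing or nothing is serving it, restarting the socket_vmnet service on the host should clear the "Connection refused" before the suite is re-run.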

TestNoKubernetes/serial/StartWithK8s (9.84s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-837000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-837000 --driver=qemu2 : exit status 80 (9.799821s)

-- stdout --
	* [NoKubernetes-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-837000" primary control-plane node in "NoKubernetes-837000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-837000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-837000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-837000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-837000 -n NoKubernetes-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-837000 -n NoKubernetes-837000: exit status 7 (43.845709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.84s)

TestNoKubernetes/serial/StartWithStopK8s (5.33s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-837000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-837000 --no-kubernetes --driver=qemu2 : exit status 80 (5.259472458s)

-- stdout --
	* [NoKubernetes-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-837000
	* Restarting existing qemu2 VM for "NoKubernetes-837000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-837000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-837000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-837000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-837000 -n NoKubernetes-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-837000 -n NoKubernetes-837000: exit status 7 (65.169292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.33s)

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-837000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-837000 --no-kubernetes --driver=qemu2 : exit status 80 (5.246783792s)

-- stdout --
	* [NoKubernetes-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-837000
	* Restarting existing qemu2 VM for "NoKubernetes-837000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-837000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-837000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-837000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-837000 -n NoKubernetes-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-837000 -n NoKubernetes-837000: exit status 7 (52.670542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

TestNoKubernetes/serial/StartNoArgs (5.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-837000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-837000 --driver=qemu2 : exit status 80 (5.281896125s)

-- stdout --
	* [NoKubernetes-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-837000
	* Restarting existing qemu2 VM for "NoKubernetes-837000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-837000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-837000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-837000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-837000 -n NoKubernetes-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-837000 -n NoKubernetes-837000: exit status 7 (60.164167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-837000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)
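
Note: all four NoKubernetes subtests above fail before Kubernetes is involved; the "Restarting existing qemu2 VM" path dials the same dead /var/run/socket_vmnet socket. The log's own suggested cleanup is sketched below, preceded by a check for a stale pidfile from the failed restarts (the pidfile path mirrors the machines layout shown elsewhere in this report and is an assumption for this profile):

	# look for a leftover qemu process before discarding the profile
	cat /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/NoKubernetes-837000/qemu.pid 2>/dev/null
	out/minikube-darwin-arm64 delete -p NoKubernetes-837000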

TestNetworkPlugins/group/custom-flannel/Start (9.98s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.976023417s)

-- stdout --
	* [custom-flannel-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-150000" primary control-plane node in "custom-flannel-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:27:48.463247   15099 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:27:48.463374   15099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:27:48.463377   15099 out.go:358] Setting ErrFile to fd 2...
	I0819 11:27:48.463379   15099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:27:48.463507   15099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:27:48.464562   15099 out.go:352] Setting JSON to false
	I0819 11:27:48.481367   15099 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7035,"bootTime":1724085033,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:27:48.481444   15099 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:27:48.488251   15099 out.go:177] * [custom-flannel-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:27:48.495952   15099 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:27:48.496050   15099 notify.go:220] Checking for updates...
	I0819 11:27:48.502059   15099 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:27:48.503602   15099 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:27:48.507038   15099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:27:48.510051   15099 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:27:48.513144   15099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:27:48.517423   15099 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:27:48.517486   15099 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:27:48.517524   15099 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:27:48.522378   15099 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:27:48.527019   15099 start.go:297] selected driver: qemu2
	I0819 11:27:48.527025   15099 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:27:48.527031   15099 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:27:48.529485   15099 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:27:48.532042   15099 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:27:48.535180   15099 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:27:48.535224   15099 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0819 11:27:48.535233   15099 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0819 11:27:48.535281   15099 start.go:340] cluster config:
	{Name:custom-flannel-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:27:48.538927   15099 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:27:48.546056   15099 out.go:177] * Starting "custom-flannel-150000" primary control-plane node in "custom-flannel-150000" cluster
	I0819 11:27:48.550071   15099 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:27:48.550086   15099 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:27:48.550096   15099 cache.go:56] Caching tarball of preloaded images
	I0819 11:27:48.550150   15099 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:27:48.550154   15099 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:27:48.550224   15099 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/custom-flannel-150000/config.json ...
	I0819 11:27:48.550235   15099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/custom-flannel-150000/config.json: {Name:mk802e48ddd756767e7f295a8d1782c0c8b9ef9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:27:48.550540   15099 start.go:360] acquireMachinesLock for custom-flannel-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:27:48.550576   15099 start.go:364] duration metric: took 26.416µs to acquireMachinesLock for "custom-flannel-150000"
	I0819 11:27:48.550589   15099 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:27:48.550618   15099 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:27:48.559031   15099 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:27:48.574428   15099 start.go:159] libmachine.API.Create for "custom-flannel-150000" (driver="qemu2")
	I0819 11:27:48.574448   15099 client.go:168] LocalClient.Create starting
	I0819 11:27:48.574509   15099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:27:48.574537   15099 main.go:141] libmachine: Decoding PEM data...
	I0819 11:27:48.574545   15099 main.go:141] libmachine: Parsing certificate...
	I0819 11:27:48.574580   15099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:27:48.574603   15099 main.go:141] libmachine: Decoding PEM data...
	I0819 11:27:48.574610   15099 main.go:141] libmachine: Parsing certificate...
	I0819 11:27:48.574955   15099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:27:48.788795   15099 main.go:141] libmachine: Creating SSH key...
	I0819 11:27:48.986565   15099 main.go:141] libmachine: Creating Disk image...
	I0819 11:27:48.986577   15099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:27:48.986820   15099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2
	I0819 11:27:48.996537   15099 main.go:141] libmachine: STDOUT: 
	I0819 11:27:48.996557   15099 main.go:141] libmachine: STDERR: 
	I0819 11:27:48.996607   15099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2 +20000M
	I0819 11:27:49.004967   15099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:27:49.004982   15099 main.go:141] libmachine: STDERR: 
	I0819 11:27:49.004999   15099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2
	I0819 11:27:49.005005   15099 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:27:49.005018   15099 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:27:49.005048   15099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:b8:8d:b3:d0:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2
	I0819 11:27:49.006640   15099 main.go:141] libmachine: STDOUT: 
	I0819 11:27:49.006661   15099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:27:49.006681   15099 client.go:171] duration metric: took 432.224083ms to LocalClient.Create
	I0819 11:27:51.009004   15099 start.go:128] duration metric: took 2.458327209s to createHost
	I0819 11:27:51.009080   15099 start.go:83] releasing machines lock for "custom-flannel-150000", held for 2.4584695s
	W0819 11:27:51.009145   15099 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:27:51.023480   15099 out.go:177] * Deleting "custom-flannel-150000" in qemu2 ...
	W0819 11:27:51.054753   15099 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:27:51.054792   15099 start.go:729] Will try again in 5 seconds ...
	I0819 11:27:56.057098   15099 start.go:360] acquireMachinesLock for custom-flannel-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:27:56.057597   15099 start.go:364] duration metric: took 390.875µs to acquireMachinesLock for "custom-flannel-150000"
	I0819 11:27:56.057677   15099 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:27:56.057962   15099 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:27:56.063795   15099 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:27:56.109920   15099 start.go:159] libmachine.API.Create for "custom-flannel-150000" (driver="qemu2")
	I0819 11:27:56.109964   15099 client.go:168] LocalClient.Create starting
	I0819 11:27:56.110080   15099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:27:56.110144   15099 main.go:141] libmachine: Decoding PEM data...
	I0819 11:27:56.110162   15099 main.go:141] libmachine: Parsing certificate...
	I0819 11:27:56.110221   15099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:27:56.110263   15099 main.go:141] libmachine: Decoding PEM data...
	I0819 11:27:56.110275   15099 main.go:141] libmachine: Parsing certificate...
	I0819 11:27:56.110834   15099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:27:56.268276   15099 main.go:141] libmachine: Creating SSH key...
	I0819 11:27:56.349877   15099 main.go:141] libmachine: Creating Disk image...
	I0819 11:27:56.349883   15099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:27:56.350117   15099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2
	I0819 11:27:56.359646   15099 main.go:141] libmachine: STDOUT: 
	I0819 11:27:56.359680   15099 main.go:141] libmachine: STDERR: 
	I0819 11:27:56.359733   15099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2 +20000M
	I0819 11:27:56.367800   15099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:27:56.367819   15099 main.go:141] libmachine: STDERR: 
	I0819 11:27:56.367831   15099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2
	I0819 11:27:56.367836   15099 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:27:56.367846   15099 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:27:56.367876   15099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:10:c1:23:38:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/custom-flannel-150000/disk.qcow2
	I0819 11:27:56.369541   15099 main.go:141] libmachine: STDOUT: 
	I0819 11:27:56.369561   15099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:27:56.369572   15099 client.go:171] duration metric: took 259.602417ms to LocalClient.Create
	I0819 11:27:58.371784   15099 start.go:128] duration metric: took 2.313776166s to createHost
	I0819 11:27:58.371858   15099 start.go:83] releasing machines lock for "custom-flannel-150000", held for 2.314229959s
	W0819 11:27:58.372351   15099 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:27:58.381853   15099 out.go:201] 
	W0819 11:27:58.386927   15099 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:27:58.386947   15099 out.go:270] * 
	* 
	W0819 11:27:58.389100   15099 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:27:58.398898   15099 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.98s)
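
Note: the stderr above records the exact invocation the qemu2 driver uses: socket_vmnet_client connects to /var/run/socket_vmnet and then execs qemu-system-aarch64 with the vmnet socket passed as fd 3 (-netdev socket,id=net0,fd=3). The failure can therefore be reproduced without QEMU by dialing the socket with the same client binary; the paths are copied from the log, and using /usr/bin/true as the exec target is an assumption, since the client should fail at connect before exec'ing anything:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

"Connection refused" here points at the host-side daemon rather than at any per-profile VM configuration.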

TestNetworkPlugins/group/auto/Start (9.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.775542s)

-- stdout --
	* [auto-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-150000" primary control-plane node in "auto-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:28:00.863773   15220 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:28:00.863897   15220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:00.863900   15220 out.go:358] Setting ErrFile to fd 2...
	I0819 11:28:00.863902   15220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:00.864053   15220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:28:00.865156   15220 out.go:352] Setting JSON to false
	I0819 11:28:00.881893   15220 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7047,"bootTime":1724085033,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:28:00.881967   15220 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:28:00.889176   15220 out.go:177] * [auto-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:28:00.897104   15220 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:28:00.897179   15220 notify.go:220] Checking for updates...
	I0819 11:28:00.904091   15220 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:28:00.907069   15220 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:28:00.910092   15220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:28:00.912998   15220 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:28:00.916117   15220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:28:00.919419   15220 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:28:00.919480   15220 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:28:00.919527   15220 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:28:00.924034   15220 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:28:00.931094   15220 start.go:297] selected driver: qemu2
	I0819 11:28:00.931102   15220 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:28:00.931108   15220 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:28:00.933372   15220 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:28:00.936022   15220 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:28:00.939246   15220 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:28:00.939286   15220 cni.go:84] Creating CNI manager for ""
	I0819 11:28:00.939293   15220 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:28:00.939301   15220 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:28:00.939329   15220 start.go:340] cluster config:
	{Name:auto-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:28:00.942867   15220 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:28:00.948119   15220 out.go:177] * Starting "auto-150000" primary control-plane node in "auto-150000" cluster
	I0819 11:28:00.952088   15220 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:28:00.952103   15220 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:28:00.952114   15220 cache.go:56] Caching tarball of preloaded images
	I0819 11:28:00.952178   15220 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:28:00.952184   15220 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:28:00.952252   15220 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/auto-150000/config.json ...
	I0819 11:28:00.952261   15220 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/auto-150000/config.json: {Name:mkddf0c5e02126b4f16916971f93479b5c2bbfa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:28:00.952476   15220 start.go:360] acquireMachinesLock for auto-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:28:00.952505   15220 start.go:364] duration metric: took 24µs to acquireMachinesLock for "auto-150000"
	I0819 11:28:00.952517   15220 start.go:93] Provisioning new machine with config: &{Name:auto-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:28:00.952543   15220 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:28:00.960053   15220 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:28:00.975937   15220 start.go:159] libmachine.API.Create for "auto-150000" (driver="qemu2")
	I0819 11:28:00.975969   15220 client.go:168] LocalClient.Create starting
	I0819 11:28:00.976032   15220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:28:00.976061   15220 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:00.976070   15220 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:00.976103   15220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:28:00.976128   15220 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:00.976135   15220 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:00.976502   15220 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:28:01.124025   15220 main.go:141] libmachine: Creating SSH key...
	I0819 11:28:01.198703   15220 main.go:141] libmachine: Creating Disk image...
	I0819 11:28:01.198708   15220 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:28:01.198934   15220 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2
	I0819 11:28:01.208294   15220 main.go:141] libmachine: STDOUT: 
	I0819 11:28:01.208311   15220 main.go:141] libmachine: STDERR: 
	I0819 11:28:01.208354   15220 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2 +20000M
	I0819 11:28:01.216335   15220 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:28:01.216353   15220 main.go:141] libmachine: STDERR: 
	I0819 11:28:01.216371   15220 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2
	I0819 11:28:01.216375   15220 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:28:01.216386   15220 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:28:01.216412   15220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:3a:1f:c0:44:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2
	I0819 11:28:01.217987   15220 main.go:141] libmachine: STDOUT: 
	I0819 11:28:01.218005   15220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:28:01.218022   15220 client.go:171] duration metric: took 242.047833ms to LocalClient.Create
	I0819 11:28:03.220251   15220 start.go:128] duration metric: took 2.267673875s to createHost
	I0819 11:28:03.220350   15220 start.go:83] releasing machines lock for "auto-150000", held for 2.267831625s
	W0819 11:28:03.220479   15220 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:03.231684   15220 out.go:177] * Deleting "auto-150000" in qemu2 ...
	W0819 11:28:03.263762   15220 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:03.263849   15220 start.go:729] Will try again in 5 seconds ...
	I0819 11:28:08.266042   15220 start.go:360] acquireMachinesLock for auto-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:28:08.266680   15220 start.go:364] duration metric: took 512.292µs to acquireMachinesLock for "auto-150000"
	I0819 11:28:08.266868   15220 start.go:93] Provisioning new machine with config: &{Name:auto-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:auto-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:28:08.267091   15220 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:28:08.271679   15220 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:28:08.315508   15220 start.go:159] libmachine.API.Create for "auto-150000" (driver="qemu2")
	I0819 11:28:08.315573   15220 client.go:168] LocalClient.Create starting
	I0819 11:28:08.315696   15220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:28:08.315775   15220 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:08.315792   15220 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:08.315864   15220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:28:08.315915   15220 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:08.315934   15220 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:08.316538   15220 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:28:08.473589   15220 main.go:141] libmachine: Creating SSH key...
	I0819 11:28:08.548260   15220 main.go:141] libmachine: Creating Disk image...
	I0819 11:28:08.548271   15220 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:28:08.548551   15220 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2
	I0819 11:28:08.558885   15220 main.go:141] libmachine: STDOUT: 
	I0819 11:28:08.558908   15220 main.go:141] libmachine: STDERR: 
	I0819 11:28:08.558975   15220 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2 +20000M
	I0819 11:28:08.568036   15220 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:28:08.568056   15220 main.go:141] libmachine: STDERR: 
	I0819 11:28:08.568070   15220 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2
	I0819 11:28:08.568077   15220 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:28:08.568089   15220 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:28:08.568130   15220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:07:12:10:8b:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/auto-150000/disk.qcow2
	I0819 11:28:08.569877   15220 main.go:141] libmachine: STDOUT: 
	I0819 11:28:08.569893   15220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:28:08.569906   15220 client.go:171] duration metric: took 254.327584ms to LocalClient.Create
	I0819 11:28:10.572090   15220 start.go:128] duration metric: took 2.304963375s to createHost
	I0819 11:28:10.572165   15220 start.go:83] releasing machines lock for "auto-150000", held for 2.305433542s
	W0819 11:28:10.572613   15220 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:10.582114   15220 out.go:201] 
	W0819 11:28:10.589405   15220 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:28:10.589430   15220 out.go:270] * 
	* 
	W0819 11:28:10.591013   15220 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:28:10.601202   15220 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.78s)
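Note: every start failure in this group reduces to the same root cause: nothing is accepting connections on the socket_vmnet unix socket, so /opt/socket_vmnet/bin/socket_vmnet_client exits with "Connection refused" before qemu-system-aarch64 ever launches. A minimal probe sketch in Go follows, assuming only the socket path reported in the logs above; it is a diagnostic illustration, not part of the minikube test suite.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// Probe the unix socket that socket_vmnet_client connects to. On the
	// failing agent this exits 1 with "connection refused", matching the
	// STDERR captured above; on a healthy agent it prints the success line.
	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failure message
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "connect %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}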

TestNetworkPlugins/group/false/Start (9.8s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.802929416s)

-- stdout --
	* [false-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-150000" primary control-plane node in "false-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:28:12.775717   15333 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:28:12.775865   15333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:12.775868   15333 out.go:358] Setting ErrFile to fd 2...
	I0819 11:28:12.775870   15333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:12.775997   15333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:28:12.777074   15333 out.go:352] Setting JSON to false
	I0819 11:28:12.793855   15333 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7059,"bootTime":1724085033,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:28:12.793939   15333 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:28:12.798339   15333 out.go:177] * [false-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:28:12.806372   15333 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:28:12.806450   15333 notify.go:220] Checking for updates...
	I0819 11:28:12.813358   15333 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:28:12.816316   15333 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:28:12.819255   15333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:28:12.822281   15333 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:28:12.825364   15333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:28:12.828669   15333 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:28:12.828730   15333 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:28:12.828773   15333 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:28:12.833304   15333 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:28:12.840309   15333 start.go:297] selected driver: qemu2
	I0819 11:28:12.840315   15333 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:28:12.840321   15333 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:28:12.842323   15333 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:28:12.845282   15333 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:28:12.848372   15333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:28:12.848405   15333 cni.go:84] Creating CNI manager for "false"
	I0819 11:28:12.848430   15333 start.go:340] cluster config:
	{Name:false-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:28:12.851639   15333 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:28:12.859283   15333 out.go:177] * Starting "false-150000" primary control-plane node in "false-150000" cluster
	I0819 11:28:12.863186   15333 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:28:12.863202   15333 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:28:12.863214   15333 cache.go:56] Caching tarball of preloaded images
	I0819 11:28:12.863277   15333 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:28:12.863282   15333 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:28:12.863359   15333 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/false-150000/config.json ...
	I0819 11:28:12.863377   15333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/false-150000/config.json: {Name:mk59533552d6e074672a7ffe41fb717ab076590b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:28:12.863706   15333 start.go:360] acquireMachinesLock for false-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:28:12.863736   15333 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "false-150000"
	I0819 11:28:12.863747   15333 start.go:93] Provisioning new machine with config: &{Name:false-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:false-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:28:12.863776   15333 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:28:12.868354   15333 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:28:12.883729   15333 start.go:159] libmachine.API.Create for "false-150000" (driver="qemu2")
	I0819 11:28:12.883754   15333 client.go:168] LocalClient.Create starting
	I0819 11:28:12.883812   15333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:28:12.883841   15333 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:12.883851   15333 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:12.883893   15333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:28:12.883916   15333 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:12.883929   15333 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:12.884258   15333 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:28:13.033721   15333 main.go:141] libmachine: Creating SSH key...
	I0819 11:28:13.167126   15333 main.go:141] libmachine: Creating Disk image...
	I0819 11:28:13.167144   15333 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:28:13.170745   15333 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2
	I0819 11:28:13.180989   15333 main.go:141] libmachine: STDOUT: 
	I0819 11:28:13.181007   15333 main.go:141] libmachine: STDERR: 
	I0819 11:28:13.181049   15333 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2 +20000M
	I0819 11:28:13.189285   15333 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:28:13.189305   15333 main.go:141] libmachine: STDERR: 
	I0819 11:28:13.189318   15333 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2
	I0819 11:28:13.189326   15333 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:28:13.189337   15333 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:28:13.189367   15333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:f8:dd:25:46:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2
	I0819 11:28:13.191036   15333 main.go:141] libmachine: STDOUT: 
	I0819 11:28:13.191050   15333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:28:13.191069   15333 client.go:171] duration metric: took 307.310583ms to LocalClient.Create
	I0819 11:28:15.193295   15333 start.go:128] duration metric: took 2.329490209s to createHost
	I0819 11:28:15.193386   15333 start.go:83] releasing machines lock for "false-150000", held for 2.329644458s
	W0819 11:28:15.193427   15333 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:15.200802   15333 out.go:177] * Deleting "false-150000" in qemu2 ...
	W0819 11:28:15.231320   15333 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:15.231351   15333 start.go:729] Will try again in 5 seconds ...
	I0819 11:28:20.233548   15333 start.go:360] acquireMachinesLock for false-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:28:20.234011   15333 start.go:364] duration metric: took 367.625µs to acquireMachinesLock for "false-150000"
	I0819 11:28:20.234140   15333 start.go:93] Provisioning new machine with config: &{Name:false-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:false-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:28:20.234361   15333 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:28:20.244841   15333 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:28:20.289293   15333 start.go:159] libmachine.API.Create for "false-150000" (driver="qemu2")
	I0819 11:28:20.289358   15333 client.go:168] LocalClient.Create starting
	I0819 11:28:20.289490   15333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:28:20.289553   15333 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:20.289573   15333 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:20.289663   15333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:28:20.289711   15333 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:20.289723   15333 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:20.290425   15333 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:28:20.445466   15333 main.go:141] libmachine: Creating SSH key...
	I0819 11:28:20.480874   15333 main.go:141] libmachine: Creating Disk image...
	I0819 11:28:20.480882   15333 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:28:20.481102   15333 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2
	I0819 11:28:20.490325   15333 main.go:141] libmachine: STDOUT: 
	I0819 11:28:20.490342   15333 main.go:141] libmachine: STDERR: 
	I0819 11:28:20.490393   15333 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2 +20000M
	I0819 11:28:20.498267   15333 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:28:20.498283   15333 main.go:141] libmachine: STDERR: 
	I0819 11:28:20.498299   15333 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2
	I0819 11:28:20.498303   15333 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:28:20.498316   15333 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:28:20.498343   15333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:d6:58:71:78:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/false-150000/disk.qcow2
	I0819 11:28:20.499973   15333 main.go:141] libmachine: STDOUT: 
	I0819 11:28:20.499990   15333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:28:20.500002   15333 client.go:171] duration metric: took 210.63775ms to LocalClient.Create
	I0819 11:28:22.502209   15333 start.go:128] duration metric: took 2.267821417s to createHost
	I0819 11:28:22.502308   15333 start.go:83] releasing machines lock for "false-150000", held for 2.268282167s
	W0819 11:28:22.502698   15333 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:22.518405   15333 out.go:201] 
	W0819 11:28:22.521405   15333 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:28:22.521446   15333 out.go:270] * 
	* 
	W0819 11:28:22.523390   15333 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:28:22.535331   15333 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.80s)
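Note: the disk-image preparation succeeds on every attempt in these logs; only the socket_vmnet attach fails. For reference, a standalone Go sketch of the two qemu-img steps the log shows, using placeholder file names rather than the real per-profile paths under .minikube/machines:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and aborts on failure, mirroring how the log
	// records STDOUT/STDERR for each qemu-img invocation.
	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		// Placeholder names; the log uses full workspace paths.
		raw, qcow := "disk.qcow2.raw", "disk.qcow2"
		// Convert the raw image to qcow2, then grow it by 20000M, the two
		// steps logged before "Starting QEMU VM...".
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow)
		run("qemu-img", "resize", qcow, "+20000M")
	}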

TestNetworkPlugins/group/kindnet/Start (9.76s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.762142167s)

-- stdout --
	* [kindnet-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-150000" primary control-plane node in "kindnet-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:28:24.757627   15448 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:28:24.757765   15448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:24.757769   15448 out.go:358] Setting ErrFile to fd 2...
	I0819 11:28:24.757771   15448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:24.757885   15448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:28:24.758951   15448 out.go:352] Setting JSON to false
	I0819 11:28:24.775733   15448 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7071,"bootTime":1724085033,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:28:24.775803   15448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:28:24.780843   15448 out.go:177] * [kindnet-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:28:24.788604   15448 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:28:24.788637   15448 notify.go:220] Checking for updates...
	I0819 11:28:24.795668   15448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:28:24.798640   15448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:28:24.801711   15448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:28:24.804603   15448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:28:24.807627   15448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:28:24.811024   15448 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:28:24.811088   15448 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:28:24.811128   15448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:28:24.815562   15448 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:28:24.822622   15448 start.go:297] selected driver: qemu2
	I0819 11:28:24.822629   15448 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:28:24.822635   15448 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:28:24.824846   15448 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:28:24.827670   15448 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:28:24.830741   15448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:28:24.830778   15448 cni.go:84] Creating CNI manager for "kindnet"
	I0819 11:28:24.830782   15448 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:28:24.830819   15448 start.go:340] cluster config:
	{Name:kindnet-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:28:24.834316   15448 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:28:24.841629   15448 out.go:177] * Starting "kindnet-150000" primary control-plane node in "kindnet-150000" cluster
	I0819 11:28:24.845618   15448 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:28:24.845631   15448 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:28:24.845642   15448 cache.go:56] Caching tarball of preloaded images
	I0819 11:28:24.845701   15448 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:28:24.845706   15448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:28:24.845759   15448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/kindnet-150000/config.json ...
	I0819 11:28:24.845769   15448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/kindnet-150000/config.json: {Name:mk38d019620ec7459b2592747d3f5a4e04a45928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:28:24.845975   15448 start.go:360] acquireMachinesLock for kindnet-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:28:24.846005   15448 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "kindnet-150000"
	I0819 11:28:24.846017   15448 start.go:93] Provisioning new machine with config: &{Name:kindnet-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:28:24.846046   15448 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:28:24.853606   15448 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:28:24.869056   15448 start.go:159] libmachine.API.Create for "kindnet-150000" (driver="qemu2")
	I0819 11:28:24.869080   15448 client.go:168] LocalClient.Create starting
	I0819 11:28:24.869142   15448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:28:24.869173   15448 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:24.869186   15448 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:24.869221   15448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:28:24.869243   15448 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:24.869251   15448 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:24.869667   15448 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:28:25.018470   15448 main.go:141] libmachine: Creating SSH key...
	I0819 11:28:25.157839   15448 main.go:141] libmachine: Creating Disk image...
	I0819 11:28:25.157846   15448 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:28:25.158072   15448 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2
	I0819 11:28:25.167439   15448 main.go:141] libmachine: STDOUT: 
	I0819 11:28:25.167458   15448 main.go:141] libmachine: STDERR: 
	I0819 11:28:25.167509   15448 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2 +20000M
	I0819 11:28:25.175492   15448 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:28:25.175507   15448 main.go:141] libmachine: STDERR: 
	I0819 11:28:25.175519   15448 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2
	I0819 11:28:25.175534   15448 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:28:25.175546   15448 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:28:25.175578   15448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:0f:59:60:56:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2
	I0819 11:28:25.177194   15448 main.go:141] libmachine: STDOUT: 
	I0819 11:28:25.177210   15448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:28:25.177231   15448 client.go:171] duration metric: took 308.147042ms to LocalClient.Create
	I0819 11:28:27.179453   15448 start.go:128] duration metric: took 2.333381875s to createHost
	I0819 11:28:27.179529   15448 start.go:83] releasing machines lock for "kindnet-150000", held for 2.333523291s
	W0819 11:28:27.179639   15448 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:27.191750   15448 out.go:177] * Deleting "kindnet-150000" in qemu2 ...
	W0819 11:28:27.218506   15448 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:27.218532   15448 start.go:729] Will try again in 5 seconds ...
	I0819 11:28:32.220611   15448 start.go:360] acquireMachinesLock for kindnet-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:28:32.220703   15448 start.go:364] duration metric: took 79.25µs to acquireMachinesLock for "kindnet-150000"
	I0819 11:28:32.220717   15448 start.go:93] Provisioning new machine with config: &{Name:kindnet-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:28:32.220754   15448 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:28:32.225937   15448 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:28:32.242258   15448 start.go:159] libmachine.API.Create for "kindnet-150000" (driver="qemu2")
	I0819 11:28:32.242303   15448 client.go:168] LocalClient.Create starting
	I0819 11:28:32.242391   15448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:28:32.242423   15448 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:32.242437   15448 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:32.242474   15448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:28:32.242499   15448 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:32.242506   15448 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:32.242822   15448 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:28:32.390008   15448 main.go:141] libmachine: Creating SSH key...
	I0819 11:28:32.429757   15448 main.go:141] libmachine: Creating Disk image...
	I0819 11:28:32.429763   15448 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:28:32.429986   15448 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2
	I0819 11:28:32.439300   15448 main.go:141] libmachine: STDOUT: 
	I0819 11:28:32.439322   15448 main.go:141] libmachine: STDERR: 
	I0819 11:28:32.439365   15448 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2 +20000M
	I0819 11:28:32.447463   15448 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:28:32.447487   15448 main.go:141] libmachine: STDERR: 
	I0819 11:28:32.447499   15448 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2
	I0819 11:28:32.447505   15448 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:28:32.447527   15448 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:28:32.447556   15448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:2a:de:49:d8:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kindnet-150000/disk.qcow2
	I0819 11:28:32.449243   15448 main.go:141] libmachine: STDOUT: 
	I0819 11:28:32.449265   15448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:28:32.449278   15448 client.go:171] duration metric: took 206.970958ms to LocalClient.Create
	I0819 11:28:34.451488   15448 start.go:128] duration metric: took 2.23071475s to createHost
	I0819 11:28:34.451618   15448 start.go:83] releasing machines lock for "kindnet-150000", held for 2.230849042s
	W0819 11:28:34.451885   15448 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:34.463627   15448 out.go:201] 
	W0819 11:28:34.467638   15448 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:28:34.467676   15448 out.go:270] * 
	* 
	W0819 11:28:34.469425   15448 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:28:34.478646   15448 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.76s)
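
Every kindnet start attempt above dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. As a sanity check outside of minikube, a minimal Go probe such as the sketch below (illustrative, not part of the test suite; the socket path is the SocketVMnetPath from the config dump above) can confirm whether the socket_vmnet daemon is accepting connections:

	// probe.go: dial the socket_vmnet unix socket and report the result.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A daemon that is not running yields "connect: connection refused",
			// matching the STDERR captured in every attempt above.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial is refused while the socket file exists, the daemon behind it is down, which is consistent with both create attempts here failing in roughly 300ms, before the VM ever boots.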

TestNetworkPlugins/group/flannel/Start (9.94s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.937462791s)

-- stdout --
	* [flannel-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-150000" primary control-plane node in "flannel-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:28:36.757959   15565 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:28:36.758085   15565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:36.758090   15565 out.go:358] Setting ErrFile to fd 2...
	I0819 11:28:36.758093   15565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:36.758213   15565 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:28:36.759343   15565 out.go:352] Setting JSON to false
	I0819 11:28:36.776709   15565 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7083,"bootTime":1724085033,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:28:36.776821   15565 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:28:36.781382   15565 out.go:177] * [flannel-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:28:36.788314   15565 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:28:36.788379   15565 notify.go:220] Checking for updates...
	I0819 11:28:36.794220   15565 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:28:36.797250   15565 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:28:36.800296   15565 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:28:36.803218   15565 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:28:36.806231   15565 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:28:36.809652   15565 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:28:36.809714   15565 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:28:36.809768   15565 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:28:36.813109   15565 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:28:36.820276   15565 start.go:297] selected driver: qemu2
	I0819 11:28:36.820285   15565 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:28:36.820292   15565 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:28:36.822422   15565 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:28:36.823685   15565 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:28:36.827375   15565 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:28:36.827431   15565 cni.go:84] Creating CNI manager for "flannel"
	I0819 11:28:36.827436   15565 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0819 11:28:36.827466   15565 start.go:340] cluster config:
	{Name:flannel-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:28:36.830924   15565 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:28:36.839234   15565 out.go:177] * Starting "flannel-150000" primary control-plane node in "flannel-150000" cluster
	I0819 11:28:36.843234   15565 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:28:36.843247   15565 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:28:36.843253   15565 cache.go:56] Caching tarball of preloaded images
	I0819 11:28:36.843309   15565 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:28:36.843314   15565 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:28:36.843368   15565 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/flannel-150000/config.json ...
	I0819 11:28:36.843378   15565 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/flannel-150000/config.json: {Name:mk6d2375307f6d3ccad41ada8006a5c0bd2f812d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:28:36.843673   15565 start.go:360] acquireMachinesLock for flannel-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:28:36.843703   15565 start.go:364] duration metric: took 25.041µs to acquireMachinesLock for "flannel-150000"
	I0819 11:28:36.843714   15565 start.go:93] Provisioning new machine with config: &{Name:flannel-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:28:36.843760   15565 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:28:36.852218   15565 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:28:36.867112   15565 start.go:159] libmachine.API.Create for "flannel-150000" (driver="qemu2")
	I0819 11:28:36.867133   15565 client.go:168] LocalClient.Create starting
	I0819 11:28:36.867199   15565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:28:36.867228   15565 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:36.867238   15565 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:36.867273   15565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:28:36.867295   15565 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:36.867304   15565 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:36.867637   15565 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:28:37.015201   15565 main.go:141] libmachine: Creating SSH key...
	I0819 11:28:37.158301   15565 main.go:141] libmachine: Creating Disk image...
	I0819 11:28:37.158309   15565 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:28:37.158531   15565 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2
	I0819 11:28:37.167841   15565 main.go:141] libmachine: STDOUT: 
	I0819 11:28:37.167858   15565 main.go:141] libmachine: STDERR: 
	I0819 11:28:37.167907   15565 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2 +20000M
	I0819 11:28:37.175907   15565 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:28:37.175933   15565 main.go:141] libmachine: STDERR: 
	I0819 11:28:37.175950   15565 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2
	I0819 11:28:37.175955   15565 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:28:37.175967   15565 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:28:37.175990   15565 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b0:5d:21:54:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2
	I0819 11:28:37.177623   15565 main.go:141] libmachine: STDOUT: 
	I0819 11:28:37.177638   15565 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:28:37.177657   15565 client.go:171] duration metric: took 310.52125ms to LocalClient.Create
	I0819 11:28:39.179859   15565 start.go:128] duration metric: took 2.336078209s to createHost
	I0819 11:28:39.179938   15565 start.go:83] releasing machines lock for "flannel-150000", held for 2.336236042s
	W0819 11:28:39.180094   15565 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:39.195517   15565 out.go:177] * Deleting "flannel-150000" in qemu2 ...
	W0819 11:28:39.224010   15565 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:39.224040   15565 start.go:729] Will try again in 5 seconds ...
	I0819 11:28:44.226250   15565 start.go:360] acquireMachinesLock for flannel-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:28:44.226728   15565 start.go:364] duration metric: took 391.917µs to acquireMachinesLock for "flannel-150000"
	I0819 11:28:44.226857   15565 start.go:93] Provisioning new machine with config: &{Name:flannel-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:28:44.227185   15565 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:28:44.233042   15565 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:28:44.282579   15565 start.go:159] libmachine.API.Create for "flannel-150000" (driver="qemu2")
	I0819 11:28:44.282635   15565 client.go:168] LocalClient.Create starting
	I0819 11:28:44.282762   15565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:28:44.282840   15565 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:44.282857   15565 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:44.282929   15565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:28:44.282982   15565 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:44.282996   15565 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:44.283682   15565 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:28:44.441302   15565 main.go:141] libmachine: Creating SSH key...
	I0819 11:28:44.599777   15565 main.go:141] libmachine: Creating Disk image...
	I0819 11:28:44.599787   15565 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:28:44.600029   15565 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2
	I0819 11:28:44.609577   15565 main.go:141] libmachine: STDOUT: 
	I0819 11:28:44.609597   15565 main.go:141] libmachine: STDERR: 
	I0819 11:28:44.609647   15565 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2 +20000M
	I0819 11:28:44.617663   15565 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:28:44.617689   15565 main.go:141] libmachine: STDERR: 
	I0819 11:28:44.617702   15565 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2
	I0819 11:28:44.617707   15565 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:28:44.617718   15565 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:28:44.617743   15565 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:08:0a:a9:31:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/flannel-150000/disk.qcow2
	I0819 11:28:44.619382   15565 main.go:141] libmachine: STDOUT: 
	I0819 11:28:44.619398   15565 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:28:44.619411   15565 client.go:171] duration metric: took 336.77175ms to LocalClient.Create
	I0819 11:28:46.621622   15565 start.go:128] duration metric: took 2.394409792s to createHost
	I0819 11:28:46.621699   15565 start.go:83] releasing machines lock for "flannel-150000", held for 2.394957417s
	W0819 11:28:46.622087   15565 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:46.637664   15565 out.go:201] 
	W0819 11:28:46.641701   15565 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:28:46.641729   15565 out.go:270] * 
	* 
	W0819 11:28:46.644138   15565 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:28:46.653640   15565 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.94s)
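
The flannel run repeats the recovery shape seen in the kindnet failure: createHost fails, the half-created profile is deleted, and exactly one retry follows after a fixed five-second wait ("Will try again in 5 seconds ...") before the run exits with GUEST_PROVISION. A minimal Go sketch of that control flow, with illustrative names rather than minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the createHost step in the logs; in these runs
	// it launches QEMU through socket_vmnet_client and is refused immediately.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// startWithRetry mirrors the logged behavior: report the failure, delete
	// the partial machine, wait 5s, then make one final attempt.
	func startWithRetry(profile string) error {
		err := createHost(profile)
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
		time.Sleep(5 * time.Second)
		return createHost(profile)
	}

	func main() {
		if err := startWithRetry("flannel-150000"); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}

Because the root cause (no listener on /var/run/socket_vmnet) persists across the wait, the retry changes nothing and each test burns roughly ten seconds before failing.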

TestNetworkPlugins/group/enable-default-cni/Start (9.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.846948625s)

-- stdout --
	* [enable-default-cni-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-150000" primary control-plane node in "enable-default-cni-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:28:49.087449   15689 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:28:49.087597   15689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:49.087601   15689 out.go:358] Setting ErrFile to fd 2...
	I0819 11:28:49.087603   15689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:28:49.087740   15689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:28:49.088784   15689 out.go:352] Setting JSON to false
	I0819 11:28:49.105436   15689 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7096,"bootTime":1724085033,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:28:49.105519   15689 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:28:49.111393   15689 out.go:177] * [enable-default-cni-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:28:49.119317   15689 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:28:49.119332   15689 notify.go:220] Checking for updates...
	I0819 11:28:49.126383   15689 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:28:49.129443   15689 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:28:49.132401   15689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:28:49.135362   15689 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:28:49.138393   15689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:28:49.141715   15689 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:28:49.141779   15689 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:28:49.141827   15689 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:28:49.146401   15689 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:28:49.153380   15689 start.go:297] selected driver: qemu2
	I0819 11:28:49.153387   15689 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:28:49.153396   15689 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:28:49.155742   15689 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:28:49.160359   15689 out.go:177] * Automatically selected the socket_vmnet network
	E0819 11:28:49.163372   15689 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0819 11:28:49.163385   15689 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:28:49.163423   15689 cni.go:84] Creating CNI manager for "bridge"
	I0819 11:28:49.163428   15689 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:28:49.163464   15689 start.go:340] cluster config:
	{Name:enable-default-cni-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:28:49.167295   15689 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:28:49.174205   15689 out.go:177] * Starting "enable-default-cni-150000" primary control-plane node in "enable-default-cni-150000" cluster
	I0819 11:28:49.178315   15689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:28:49.178327   15689 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:28:49.178336   15689 cache.go:56] Caching tarball of preloaded images
	I0819 11:28:49.178389   15689 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:28:49.178394   15689 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:28:49.178455   15689 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/enable-default-cni-150000/config.json ...
	I0819 11:28:49.178466   15689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/enable-default-cni-150000/config.json: {Name:mk69078e9960f6734f4101e12a79ab9c5541a6f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:28:49.178813   15689 start.go:360] acquireMachinesLock for enable-default-cni-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:28:49.178848   15689 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "enable-default-cni-150000"
	I0819 11:28:49.178861   15689 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:28:49.178891   15689 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:28:49.186321   15689 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:28:49.203772   15689 start.go:159] libmachine.API.Create for "enable-default-cni-150000" (driver="qemu2")
	I0819 11:28:49.203801   15689 client.go:168] LocalClient.Create starting
	I0819 11:28:49.203870   15689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:28:49.203906   15689 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:49.203914   15689 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:49.203954   15689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:28:49.203978   15689 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:49.203983   15689 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:49.204396   15689 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:28:49.352726   15689 main.go:141] libmachine: Creating SSH key...
	I0819 11:28:49.439449   15689 main.go:141] libmachine: Creating Disk image...
	I0819 11:28:49.439454   15689 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:28:49.439693   15689 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2
	I0819 11:28:49.449021   15689 main.go:141] libmachine: STDOUT: 
	I0819 11:28:49.449036   15689 main.go:141] libmachine: STDERR: 
	I0819 11:28:49.449093   15689 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2 +20000M
	I0819 11:28:49.457112   15689 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:28:49.457128   15689 main.go:141] libmachine: STDERR: 
	I0819 11:28:49.457144   15689 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2
	I0819 11:28:49.457150   15689 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:28:49.457161   15689 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:28:49.457183   15689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:4e:f0:eb:d9:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2
	I0819 11:28:49.458862   15689 main.go:141] libmachine: STDOUT: 
	I0819 11:28:49.458875   15689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:28:49.458891   15689 client.go:171] duration metric: took 255.086375ms to LocalClient.Create
	I0819 11:28:51.461095   15689 start.go:128] duration metric: took 2.282185625s to createHost
	I0819 11:28:51.461176   15689 start.go:83] releasing machines lock for "enable-default-cni-150000", held for 2.282328292s
	W0819 11:28:51.461276   15689 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:51.473610   15689 out.go:177] * Deleting "enable-default-cni-150000" in qemu2 ...
	W0819 11:28:51.504674   15689 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:51.504801   15689 start.go:729] Will try again in 5 seconds ...
	I0819 11:28:56.507020   15689 start.go:360] acquireMachinesLock for enable-default-cni-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:28:56.507462   15689 start.go:364] duration metric: took 341.25µs to acquireMachinesLock for "enable-default-cni-150000"
	I0819 11:28:56.507509   15689 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:28:56.507686   15689 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:28:56.513718   15689 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:28:56.551490   15689 start.go:159] libmachine.API.Create for "enable-default-cni-150000" (driver="qemu2")
	I0819 11:28:56.551542   15689 client.go:168] LocalClient.Create starting
	I0819 11:28:56.551669   15689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:28:56.551731   15689 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:56.551744   15689 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:56.551807   15689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:28:56.551847   15689 main.go:141] libmachine: Decoding PEM data...
	I0819 11:28:56.551858   15689 main.go:141] libmachine: Parsing certificate...
	I0819 11:28:56.552367   15689 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:28:56.710424   15689 main.go:141] libmachine: Creating SSH key...
	I0819 11:28:56.849843   15689 main.go:141] libmachine: Creating Disk image...
	I0819 11:28:56.849851   15689 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:28:56.850079   15689 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2
	I0819 11:28:56.859694   15689 main.go:141] libmachine: STDOUT: 
	I0819 11:28:56.859716   15689 main.go:141] libmachine: STDERR: 
	I0819 11:28:56.859762   15689 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2 +20000M
	I0819 11:28:56.867858   15689 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:28:56.867884   15689 main.go:141] libmachine: STDERR: 
	I0819 11:28:56.867897   15689 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2
	I0819 11:28:56.867904   15689 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:28:56.867913   15689 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:28:56.867938   15689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:cf:1a:50:db:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/enable-default-cni-150000/disk.qcow2
	I0819 11:28:56.869581   15689 main.go:141] libmachine: STDOUT: 
	I0819 11:28:56.869596   15689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:28:56.869608   15689 client.go:171] duration metric: took 318.063125ms to LocalClient.Create
	I0819 11:28:58.871209   15689 start.go:128] duration metric: took 2.363518833s to createHost
	I0819 11:28:58.871243   15689 start.go:83] releasing machines lock for "enable-default-cni-150000", held for 2.363775959s
	W0819 11:28:58.871410   15689 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:28:58.882868   15689 out.go:201] 
	W0819 11:28:58.885927   15689 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:28:58.885949   15689 out.go:270] * 
	* 
	W0819 11:28:58.886763   15689 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:28:58.896745   15689 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.85s)
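Every failure in this group traces to the same root cause visible in the stderr above: QEMU is launched through socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet. A minimal triage sketch, assuming socket_vmnet was installed and is managed via Homebrew (service names and paths may differ on other setups):

	# Does the daemon's unix socket exist on disk?
	ls -l /var/run/socket_vmnet

	# Is the Homebrew-managed launchd service loaded and running?
	sudo brew services info socket_vmnet

	# Restart it if stopped; it must run as root to create vmnet interfaces.
	sudo brew services restart socket_vmnet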

TestNetworkPlugins/group/bridge/Start (9.86s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.855587958s)

-- stdout --
	* [bridge-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-150000" primary control-plane node in "bridge-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:29:01.079145   15803 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:29:01.079282   15803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:01.079285   15803 out.go:358] Setting ErrFile to fd 2...
	I0819 11:29:01.079287   15803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:01.079418   15803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:29:01.080420   15803 out.go:352] Setting JSON to false
	I0819 11:29:01.096851   15803 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7108,"bootTime":1724085033,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:29:01.096921   15803 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:29:01.103364   15803 out.go:177] * [bridge-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:29:01.111442   15803 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:29:01.111483   15803 notify.go:220] Checking for updates...
	I0819 11:29:01.118363   15803 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:29:01.121400   15803 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:29:01.124301   15803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:29:01.127385   15803 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:29:01.130362   15803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:29:01.133628   15803 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:29:01.133697   15803 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:29:01.133741   15803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:29:01.138276   15803 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:29:01.145320   15803 start.go:297] selected driver: qemu2
	I0819 11:29:01.145327   15803 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:29:01.145333   15803 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:29:01.147496   15803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:29:01.150416   15803 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:29:01.153390   15803 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:29:01.153407   15803 cni.go:84] Creating CNI manager for "bridge"
	I0819 11:29:01.153410   15803 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:29:01.153436   15803 start.go:340] cluster config:
	{Name:bridge-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:01.156992   15803 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:01.164330   15803 out.go:177] * Starting "bridge-150000" primary control-plane node in "bridge-150000" cluster
	I0819 11:29:01.168378   15803 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:29:01.168404   15803 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:29:01.168414   15803 cache.go:56] Caching tarball of preloaded images
	I0819 11:29:01.168504   15803 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:29:01.168510   15803 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:29:01.168589   15803 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/bridge-150000/config.json ...
	I0819 11:29:01.168600   15803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/bridge-150000/config.json: {Name:mkfadba209bc343663ce4590c993c4dff19b522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:01.168826   15803 start.go:360] acquireMachinesLock for bridge-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:01.168859   15803 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "bridge-150000"
	I0819 11:29:01.168871   15803 start.go:93] Provisioning new machine with config: &{Name:bridge-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:29:01.168908   15803 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:29:01.177339   15803 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:29:01.194052   15803 start.go:159] libmachine.API.Create for "bridge-150000" (driver="qemu2")
	I0819 11:29:01.194074   15803 client.go:168] LocalClient.Create starting
	I0819 11:29:01.194131   15803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:29:01.194162   15803 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:01.194171   15803 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:01.194205   15803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:29:01.194227   15803 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:01.194241   15803 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:01.194566   15803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:29:01.342742   15803 main.go:141] libmachine: Creating SSH key...
	I0819 11:29:01.472267   15803 main.go:141] libmachine: Creating Disk image...
	I0819 11:29:01.472274   15803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:29:01.472495   15803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2
	I0819 11:29:01.481538   15803 main.go:141] libmachine: STDOUT: 
	I0819 11:29:01.481560   15803 main.go:141] libmachine: STDERR: 
	I0819 11:29:01.481603   15803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2 +20000M
	I0819 11:29:01.489489   15803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:29:01.489508   15803 main.go:141] libmachine: STDERR: 
	I0819 11:29:01.489526   15803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2
	I0819 11:29:01.489531   15803 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:29:01.489544   15803 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:01.489580   15803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:7c:4b:e7:07:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2
	I0819 11:29:01.491310   15803 main.go:141] libmachine: STDOUT: 
	I0819 11:29:01.491325   15803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:01.491346   15803 client.go:171] duration metric: took 297.267166ms to LocalClient.Create
	I0819 11:29:03.493405   15803 start.go:128] duration metric: took 2.324497583s to createHost
	I0819 11:29:03.493426   15803 start.go:83] releasing machines lock for "bridge-150000", held for 2.324573041s
	W0819 11:29:03.493438   15803 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:03.501963   15803 out.go:177] * Deleting "bridge-150000" in qemu2 ...
	W0819 11:29:03.515143   15803 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:03.515152   15803 start.go:729] Will try again in 5 seconds ...
	I0819 11:29:08.517352   15803 start.go:360] acquireMachinesLock for bridge-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:08.517912   15803 start.go:364] duration metric: took 450.833µs to acquireMachinesLock for "bridge-150000"
	I0819 11:29:08.518031   15803 start.go:93] Provisioning new machine with config: &{Name:bridge-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:29:08.518247   15803 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:29:08.523769   15803 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:29:08.569412   15803 start.go:159] libmachine.API.Create for "bridge-150000" (driver="qemu2")
	I0819 11:29:08.569465   15803 client.go:168] LocalClient.Create starting
	I0819 11:29:08.569583   15803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:29:08.569647   15803 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:08.569675   15803 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:08.569734   15803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:29:08.569779   15803 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:08.569795   15803 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:08.570362   15803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:29:08.793070   15803 main.go:141] libmachine: Creating SSH key...
	I0819 11:29:08.848065   15803 main.go:141] libmachine: Creating Disk image...
	I0819 11:29:08.848073   15803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:29:08.848299   15803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2
	I0819 11:29:08.857698   15803 main.go:141] libmachine: STDOUT: 
	I0819 11:29:08.857718   15803 main.go:141] libmachine: STDERR: 
	I0819 11:29:08.857781   15803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2 +20000M
	I0819 11:29:08.865878   15803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:29:08.865893   15803 main.go:141] libmachine: STDERR: 
	I0819 11:29:08.865914   15803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2
	I0819 11:29:08.865918   15803 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:29:08.865933   15803 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:08.865963   15803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:16:fc:a5:70:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/bridge-150000/disk.qcow2
	I0819 11:29:08.867567   15803 main.go:141] libmachine: STDOUT: 
	I0819 11:29:08.867583   15803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:08.867593   15803 client.go:171] duration metric: took 298.124416ms to LocalClient.Create
	I0819 11:29:10.869779   15803 start.go:128] duration metric: took 2.351513166s to createHost
	I0819 11:29:10.869860   15803 start.go:83] releasing machines lock for "bridge-150000", held for 2.351902167s
	W0819 11:29:10.870175   15803 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:10.881676   15803 out.go:201] 
	W0819 11:29:10.886663   15803 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:29:10.886695   15803 out.go:270] * 
	* 
	W0819 11:29:10.888399   15803 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:29:10.896476   15803 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.86s)
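The bridge profile fails identically to enable-default-cni, which rules out the --cni flag as a factor; the daemon is unreachable for every profile. To exercise the socket without involving QEMU, socket_vmnet_client can wrap a trivial command, assuming it follows the usage visible in the log (socket path first, then the command it execs with the connection passed as a file descriptor):

	# Exits 0 if the socket accepts connections; otherwise prints the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true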

TestNetworkPlugins/group/kubenet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.844330708s)

-- stdout --
	* [kubenet-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-150000" primary control-plane node in "kubenet-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:29:13.115775   15917 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:29:13.115894   15917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:13.115897   15917 out.go:358] Setting ErrFile to fd 2...
	I0819 11:29:13.115899   15917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:13.116041   15917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:29:13.117113   15917 out.go:352] Setting JSON to false
	I0819 11:29:13.133367   15917 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7120,"bootTime":1724085033,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:29:13.133436   15917 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:29:13.140797   15917 out.go:177] * [kubenet-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:29:13.148761   15917 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:29:13.148864   15917 notify.go:220] Checking for updates...
	I0819 11:29:13.154671   15917 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:29:13.157684   15917 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:29:13.160762   15917 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:29:13.162226   15917 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:29:13.165671   15917 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:29:13.169090   15917 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:29:13.169154   15917 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:29:13.169196   15917 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:29:13.173622   15917 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:29:13.180685   15917 start.go:297] selected driver: qemu2
	I0819 11:29:13.180691   15917 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:29:13.180696   15917 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:29:13.183038   15917 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:29:13.186739   15917 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:29:13.189786   15917 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:29:13.189806   15917 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0819 11:29:13.189838   15917 start.go:340] cluster config:
	{Name:kubenet-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:13.193783   15917 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:13.199663   15917 out.go:177] * Starting "kubenet-150000" primary control-plane node in "kubenet-150000" cluster
	I0819 11:29:13.203650   15917 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:29:13.203663   15917 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:29:13.203672   15917 cache.go:56] Caching tarball of preloaded images
	I0819 11:29:13.203728   15917 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:29:13.203733   15917 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:29:13.203797   15917 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/kubenet-150000/config.json ...
	I0819 11:29:13.203807   15917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/kubenet-150000/config.json: {Name:mk13a09f2813f0d06c4c96af933a3e287a91c63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:13.204156   15917 start.go:360] acquireMachinesLock for kubenet-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:13.204189   15917 start.go:364] duration metric: took 27µs to acquireMachinesLock for "kubenet-150000"
	I0819 11:29:13.204201   15917 start.go:93] Provisioning new machine with config: &{Name:kubenet-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:29:13.204231   15917 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:29:13.212658   15917 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:29:13.229026   15917 start.go:159] libmachine.API.Create for "kubenet-150000" (driver="qemu2")
	I0819 11:29:13.229062   15917 client.go:168] LocalClient.Create starting
	I0819 11:29:13.229129   15917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:29:13.229162   15917 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:13.229171   15917 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:13.229204   15917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:29:13.229226   15917 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:13.229235   15917 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:13.229583   15917 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:29:13.376519   15917 main.go:141] libmachine: Creating SSH key...
	I0819 11:29:13.503763   15917 main.go:141] libmachine: Creating Disk image...
	I0819 11:29:13.503773   15917 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:29:13.503992   15917 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2
	I0819 11:29:13.513285   15917 main.go:141] libmachine: STDOUT: 
	I0819 11:29:13.513306   15917 main.go:141] libmachine: STDERR: 
	I0819 11:29:13.513367   15917 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2 +20000M
	I0819 11:29:13.521478   15917 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:29:13.521498   15917 main.go:141] libmachine: STDERR: 
	I0819 11:29:13.521515   15917 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2
	I0819 11:29:13.521521   15917 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:29:13.521530   15917 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:13.521571   15917 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:c5:e0:d4:17:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2
	I0819 11:29:13.523562   15917 main.go:141] libmachine: STDOUT: 
	I0819 11:29:13.523579   15917 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:13.523609   15917 client.go:171] duration metric: took 294.543834ms to LocalClient.Create
	I0819 11:29:15.525945   15917 start.go:128] duration metric: took 2.32168375s to createHost
	I0819 11:29:15.526046   15917 start.go:83] releasing machines lock for "kubenet-150000", held for 2.321855s
	W0819 11:29:15.526101   15917 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:15.537345   15917 out.go:177] * Deleting "kubenet-150000" in qemu2 ...
	W0819 11:29:15.562236   15917 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:15.562270   15917 start.go:729] Will try again in 5 seconds ...
	I0819 11:29:20.564454   15917 start.go:360] acquireMachinesLock for kubenet-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:20.565009   15917 start.go:364] duration metric: took 460.25µs to acquireMachinesLock for "kubenet-150000"
	I0819 11:29:20.565175   15917 start.go:93] Provisioning new machine with config: &{Name:kubenet-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:29:20.565490   15917 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:29:20.574175   15917 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:29:20.623601   15917 start.go:159] libmachine.API.Create for "kubenet-150000" (driver="qemu2")
	I0819 11:29:20.623656   15917 client.go:168] LocalClient.Create starting
	I0819 11:29:20.623793   15917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:29:20.623864   15917 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:20.623884   15917 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:20.623942   15917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:29:20.623989   15917 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:20.624005   15917 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:20.624500   15917 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:29:20.782050   15917 main.go:141] libmachine: Creating SSH key...
	I0819 11:29:20.870105   15917 main.go:141] libmachine: Creating Disk image...
	I0819 11:29:20.870112   15917 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:29:20.870343   15917 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2
	I0819 11:29:20.879919   15917 main.go:141] libmachine: STDOUT: 
	I0819 11:29:20.879939   15917 main.go:141] libmachine: STDERR: 
	I0819 11:29:20.880009   15917 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2 +20000M
	I0819 11:29:20.888057   15917 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:29:20.888070   15917 main.go:141] libmachine: STDERR: 
	I0819 11:29:20.888078   15917 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2
	I0819 11:29:20.888094   15917 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:29:20.888102   15917 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:20.888129   15917 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:6d:16:f3:7c:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2
	I0819 11:29:20.889928   15917 main.go:141] libmachine: STDOUT: 
	I0819 11:29:20.889945   15917 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:20.889961   15917 client.go:171] duration metric: took 266.302ms to LocalClient.Create
	I0819 11:29:22.892119   15917 start.go:128] duration metric: took 2.326605209s to createHost
	I0819 11:29:22.892176   15917 start.go:83] releasing machines lock for "kubenet-150000", held for 2.327155458s
	W0819 11:29:22.892499   15917 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:22.903780   15917 out.go:201] 
	W0819 11:29:22.907072   15917 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:29:22.907087   15917 out.go:270] * 
	* 
	W0819 11:29:22.908899   15917 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:29:22.919960   15917 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.85s)
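To confirm that the QEMU binary, EDK2 firmware, and generated disk image are healthy independently of socket_vmnet, one can boot the same VM with user-mode networking in place of the vmnet socket. A sketch using the paths from the log above, with -daemonize, -qmp, and -pidfile dropped so the guest runs in the foreground:

	qemu-system-aarch64 -M virt,highmem=off -cpu host \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -accel hvf -m 3072 -smp 2 -boot d \
	  -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/kubenet-150000/disk.qcow2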

TestNetworkPlugins/group/calico/Start (9.84s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-150000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.843656875s)

-- stdout --
	* [calico-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-150000" primary control-plane node in "calico-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:29:25.113030   16030 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:29:25.113170   16030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:25.113174   16030 out.go:358] Setting ErrFile to fd 2...
	I0819 11:29:25.113176   16030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:25.113291   16030 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:29:25.114316   16030 out.go:352] Setting JSON to false
	I0819 11:29:25.130823   16030 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7132,"bootTime":1724085033,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:29:25.130893   16030 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:29:25.137279   16030 out.go:177] * [calico-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:29:25.141149   16030 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:29:25.141186   16030 notify.go:220] Checking for updates...
	I0819 11:29:25.148097   16030 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:29:25.151155   16030 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:29:25.154213   16030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:29:25.155639   16030 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:29:25.159178   16030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:29:25.162546   16030 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:29:25.162611   16030 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:29:25.162659   16030 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:29:25.167059   16030 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:29:25.174157   16030 start.go:297] selected driver: qemu2
	I0819 11:29:25.174163   16030 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:29:25.174169   16030 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:29:25.176258   16030 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:29:25.179228   16030 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:29:25.182322   16030 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:29:25.182346   16030 cni.go:84] Creating CNI manager for "calico"
	I0819 11:29:25.182363   16030 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0819 11:29:25.182399   16030 start.go:340] cluster config:
	{Name:calico-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:25.185693   16030 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:25.192112   16030 out.go:177] * Starting "calico-150000" primary control-plane node in "calico-150000" cluster
	I0819 11:29:25.196227   16030 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:29:25.196242   16030 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:29:25.196253   16030 cache.go:56] Caching tarball of preloaded images
	I0819 11:29:25.196314   16030 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:29:25.196320   16030 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:29:25.196382   16030 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/calico-150000/config.json ...
	I0819 11:29:25.196392   16030 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/calico-150000/config.json: {Name:mkd36c79b44088d55820e2679733e034c00239a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:25.196709   16030 start.go:360] acquireMachinesLock for calico-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:25.196737   16030 start.go:364] duration metric: took 23.958µs to acquireMachinesLock for "calico-150000"
	I0819 11:29:25.196748   16030 start.go:93] Provisioning new machine with config: &{Name:calico-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:29:25.196776   16030 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:29:25.201188   16030 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:29:25.216323   16030 start.go:159] libmachine.API.Create for "calico-150000" (driver="qemu2")
	I0819 11:29:25.216347   16030 client.go:168] LocalClient.Create starting
	I0819 11:29:25.216406   16030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:29:25.216464   16030 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:25.216473   16030 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:25.216496   16030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:29:25.216517   16030 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:25.216528   16030 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:25.216922   16030 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:29:25.366414   16030 main.go:141] libmachine: Creating SSH key...
	I0819 11:29:25.465452   16030 main.go:141] libmachine: Creating Disk image...
	I0819 11:29:25.465458   16030 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:29:25.465694   16030 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2
	I0819 11:29:25.474954   16030 main.go:141] libmachine: STDOUT: 
	I0819 11:29:25.474972   16030 main.go:141] libmachine: STDERR: 
	I0819 11:29:25.475037   16030 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2 +20000M
	I0819 11:29:25.483008   16030 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:29:25.483023   16030 main.go:141] libmachine: STDERR: 
	I0819 11:29:25.483037   16030 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2
	I0819 11:29:25.483041   16030 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:29:25.483053   16030 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:25.483079   16030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:99:39:90:50:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2
	I0819 11:29:25.484735   16030 main.go:141] libmachine: STDOUT: 
	I0819 11:29:25.484752   16030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:25.484771   16030 client.go:171] duration metric: took 268.420417ms to LocalClient.Create
	I0819 11:29:27.486932   16030 start.go:128] duration metric: took 2.290145542s to createHost
	I0819 11:29:27.486956   16030 start.go:83] releasing machines lock for "calico-150000", held for 2.290225917s
	W0819 11:29:27.486977   16030 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:27.500616   16030 out.go:177] * Deleting "calico-150000" in qemu2 ...
	W0819 11:29:27.512972   16030 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:27.512980   16030 start.go:729] Will try again in 5 seconds ...
	I0819 11:29:32.515136   16030 start.go:360] acquireMachinesLock for calico-150000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:32.515336   16030 start.go:364] duration metric: took 140.625µs to acquireMachinesLock for "calico-150000"
	I0819 11:29:32.515366   16030 start.go:93] Provisioning new machine with config: &{Name:calico-150000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-150000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:29:32.515444   16030 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:29:32.525654   16030 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:29:32.551625   16030 start.go:159] libmachine.API.Create for "calico-150000" (driver="qemu2")
	I0819 11:29:32.551665   16030 client.go:168] LocalClient.Create starting
	I0819 11:29:32.551747   16030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:29:32.551794   16030 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:32.551807   16030 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:32.551852   16030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:29:32.551882   16030 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:32.551890   16030 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:32.552446   16030 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:29:32.703064   16030 main.go:141] libmachine: Creating SSH key...
	I0819 11:29:32.864967   16030 main.go:141] libmachine: Creating Disk image...
	I0819 11:29:32.864982   16030 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:29:32.865221   16030 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2
	I0819 11:29:32.874819   16030 main.go:141] libmachine: STDOUT: 
	I0819 11:29:32.874839   16030 main.go:141] libmachine: STDERR: 
	I0819 11:29:32.874887   16030 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2 +20000M
	I0819 11:29:32.883231   16030 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:29:32.883246   16030 main.go:141] libmachine: STDERR: 
	I0819 11:29:32.883258   16030 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2
	I0819 11:29:32.883262   16030 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:29:32.883274   16030 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:32.883313   16030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:75:bf:17:c5:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/calico-150000/disk.qcow2
	I0819 11:29:32.884991   16030 main.go:141] libmachine: STDOUT: 
	I0819 11:29:32.885006   16030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:32.885029   16030 client.go:171] duration metric: took 333.360834ms to LocalClient.Create
	I0819 11:29:34.885905   16030 start.go:128] duration metric: took 2.370463625s to createHost
	I0819 11:29:34.885939   16030 start.go:83] releasing machines lock for "calico-150000", held for 2.370594583s
	W0819 11:29:34.886008   16030 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:34.894438   16030 out.go:201] 
	W0819 11:29:34.900601   16030 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:29:34.900610   16030 out.go:270] * 
	* 
	W0819 11:29:34.901052   16030 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:29:34.912578   16030 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.84s)
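
Editor's note: the trace shows the create path failing, deleting the half-created profile, sleeping five seconds (start.go:729), retrying once, and only then exiting with GUEST_PROVISION. Below is a minimal sketch of that retry shape, assuming a fixed delay and a single retry as the log shows; it is illustrative, not minikube's actual start.go.

	// retryshape.go — the single-retry pattern implied by
	// "! StartHost failed, but will try again" / "Will try again in 5 seconds ...".
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for host creation; here it always fails, mirroring
	// the repeated connection-refused errors in this report.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // fixed back-off seen in the log
			if err := startHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}

That fixed back-off is why each failed start in this report lands near ten seconds: two create attempts of roughly 2.3s each plus the 5-second wait between them.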

TestStartStop/group/old-k8s-version/serial/FirstStart (10.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-545000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-545000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.171462375s)

-- stdout --
	* [old-k8s-version-545000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-545000" primary control-plane node in "old-k8s-version-545000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-545000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:29:37.444810   16161 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:29:37.444940   16161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:37.444944   16161 out.go:358] Setting ErrFile to fd 2...
	I0819 11:29:37.444946   16161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:37.445075   16161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:29:37.446169   16161 out.go:352] Setting JSON to false
	I0819 11:29:37.462530   16161 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7144,"bootTime":1724085033,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:29:37.462611   16161 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:29:37.469662   16161 out.go:177] * [old-k8s-version-545000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:29:37.477731   16161 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:29:37.477778   16161 notify.go:220] Checking for updates...
	I0819 11:29:37.483204   16161 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:29:37.486656   16161 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:29:37.489705   16161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:29:37.492650   16161 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:29:37.495625   16161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:29:37.498924   16161 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:29:37.499007   16161 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:29:37.499059   16161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:29:37.503689   16161 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:29:37.510607   16161 start.go:297] selected driver: qemu2
	I0819 11:29:37.510613   16161 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:29:37.510618   16161 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:29:37.512820   16161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:29:37.516500   16161 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:29:37.519744   16161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:29:37.519765   16161 cni.go:84] Creating CNI manager for ""
	I0819 11:29:37.519772   16161 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 11:29:37.519801   16161 start.go:340] cluster config:
	{Name:old-k8s-version-545000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-545000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:37.523805   16161 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:37.531591   16161 out.go:177] * Starting "old-k8s-version-545000" primary control-plane node in "old-k8s-version-545000" cluster
	I0819 11:29:37.535642   16161 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:29:37.535657   16161 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:29:37.535668   16161 cache.go:56] Caching tarball of preloaded images
	I0819 11:29:37.535720   16161 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:29:37.535725   16161 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 11:29:37.535806   16161 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/old-k8s-version-545000/config.json ...
	I0819 11:29:37.535817   16161 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/old-k8s-version-545000/config.json: {Name:mk5954ab62312f5ab0e800ccb128a3551439b139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:37.536148   16161 start.go:360] acquireMachinesLock for old-k8s-version-545000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:37.536180   16161 start.go:364] duration metric: took 24.791µs to acquireMachinesLock for "old-k8s-version-545000"
	I0819 11:29:37.536191   16161 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-545000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-545000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:29:37.536215   16161 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:29:37.540670   16161 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:29:37.555754   16161 start.go:159] libmachine.API.Create for "old-k8s-version-545000" (driver="qemu2")
	I0819 11:29:37.555785   16161 client.go:168] LocalClient.Create starting
	I0819 11:29:37.555840   16161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:29:37.555871   16161 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:37.555880   16161 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:37.555913   16161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:29:37.555938   16161 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:37.555943   16161 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:37.556283   16161 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:29:37.704108   16161 main.go:141] libmachine: Creating SSH key...
	I0819 11:29:37.824456   16161 main.go:141] libmachine: Creating Disk image...
	I0819 11:29:37.824464   16161 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:29:37.824700   16161 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2
	I0819 11:29:37.833914   16161 main.go:141] libmachine: STDOUT: 
	I0819 11:29:37.833943   16161 main.go:141] libmachine: STDERR: 
	I0819 11:29:37.833993   16161 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2 +20000M
	I0819 11:29:37.841880   16161 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:29:37.841899   16161 main.go:141] libmachine: STDERR: 
	I0819 11:29:37.841913   16161 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2
	I0819 11:29:37.841918   16161 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:29:37.841928   16161 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:37.841957   16161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:73:17:2a:2a:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2
	I0819 11:29:37.843734   16161 main.go:141] libmachine: STDOUT: 
	I0819 11:29:37.843761   16161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:37.843780   16161 client.go:171] duration metric: took 287.991833ms to LocalClient.Create
	I0819 11:29:39.846014   16161 start.go:128] duration metric: took 2.309775417s to createHost
	I0819 11:29:39.846090   16161 start.go:83] releasing machines lock for "old-k8s-version-545000", held for 2.30991175s
	W0819 11:29:39.846194   16161 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:39.852391   16161 out.go:177] * Deleting "old-k8s-version-545000" in qemu2 ...
	W0819 11:29:39.880707   16161 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:39.880739   16161 start.go:729] Will try again in 5 seconds ...
	I0819 11:29:44.882913   16161 start.go:360] acquireMachinesLock for old-k8s-version-545000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:44.883360   16161 start.go:364] duration metric: took 318.958µs to acquireMachinesLock for "old-k8s-version-545000"
	I0819 11:29:44.883486   16161 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-545000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-545000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:29:44.883735   16161 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:29:44.892292   16161 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:29:44.937330   16161 start.go:159] libmachine.API.Create for "old-k8s-version-545000" (driver="qemu2")
	I0819 11:29:44.937391   16161 client.go:168] LocalClient.Create starting
	I0819 11:29:44.937504   16161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:29:44.937564   16161 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:44.937579   16161 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:44.937633   16161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:29:44.937671   16161 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:44.937681   16161 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:44.938181   16161 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:29:45.093707   16161 main.go:141] libmachine: Creating SSH key...
	I0819 11:29:45.523069   16161 main.go:141] libmachine: Creating Disk image...
	I0819 11:29:45.523082   16161 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:29:45.523358   16161 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2
	I0819 11:29:45.533431   16161 main.go:141] libmachine: STDOUT: 
	I0819 11:29:45.533457   16161 main.go:141] libmachine: STDERR: 
	I0819 11:29:45.533546   16161 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2 +20000M
	I0819 11:29:45.542173   16161 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:29:45.542187   16161 main.go:141] libmachine: STDERR: 
	I0819 11:29:45.542200   16161 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2
	I0819 11:29:45.542212   16161 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:29:45.542220   16161 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:45.542254   16161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:dc:06:c2:a5:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2
	I0819 11:29:45.543883   16161 main.go:141] libmachine: STDOUT: 
	I0819 11:29:45.543896   16161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:45.543911   16161 client.go:171] duration metric: took 606.516583ms to LocalClient.Create
	I0819 11:29:47.546126   16161 start.go:128] duration metric: took 2.662355667s to createHost
	I0819 11:29:47.546195   16161 start.go:83] releasing machines lock for "old-k8s-version-545000", held for 2.662830417s
	W0819 11:29:47.546393   16161 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-545000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-545000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:47.550995   16161 out.go:201] 
	W0819 11:29:47.561831   16161 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:29:47.561847   16161 out.go:270] * 
	* 
	W0819 11:29:47.563404   16161 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:29:47.574791   16161 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-545000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (56.289792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-545000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.23s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-545000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-545000 create -f testdata/busybox.yaml: exit status 1 (28.676125ms)

** stderr ** 
	error: context "old-k8s-version-545000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-545000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (29.03625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-545000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (29.647792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-545000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
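
Editor's note: DeployApp and every later step in this serial group fail without exercising anything, because FirstStart never created the cluster and the kubeconfig therefore has no "old-k8s-version-545000" context. The presence of a context can be checked programmatically before issuing kubectl commands; the sketch below uses client-go's kubeconfig loader and is illustrative, with only the context name taken from the log.

	// hascontext.go — check whether a kubeconfig context exists before using it
	// (illustrative; assumes k8s.io/client-go is available as a dependency).
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		const name = "old-k8s-version-545000" // context name from the failing test
		// Load kubeconfig from the default locations ($KUBECONFIG or ~/.kube/config).
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintf(os.Stderr, "loading kubeconfig: %v\n", err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			// Mirrors the kubectl failure above: context does not exist.
			fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
			os.Exit(1)
		}
		fmt.Printf("context %q found\n", name)
	}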

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-545000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-545000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-545000 describe deploy/metrics-server -n kube-system: exit status 1 (27.0025ms)

** stderr ** 
	error: context "old-k8s-version-545000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-545000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (29.64525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-545000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-545000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-545000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.190171084s)

-- stdout --
	* [old-k8s-version-545000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-545000" primary control-plane node in "old-k8s-version-545000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-545000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-545000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:29:50.030554   16209 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:29:50.030689   16209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:50.030693   16209 out.go:358] Setting ErrFile to fd 2...
	I0819 11:29:50.030695   16209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:50.030830   16209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:29:50.031968   16209 out.go:352] Setting JSON to false
	I0819 11:29:50.049113   16209 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7157,"bootTime":1724085033,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:29:50.049184   16209 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:29:50.054045   16209 out.go:177] * [old-k8s-version-545000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:29:50.061072   16209 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:29:50.061133   16209 notify.go:220] Checking for updates...
	I0819 11:29:50.069050   16209 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:29:50.072102   16209 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:29:50.075134   16209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:29:50.078130   16209 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:29:50.081110   16209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:29:50.084377   16209 config.go:182] Loaded profile config "old-k8s-version-545000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 11:29:50.088111   16209 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 11:29:50.091020   16209 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:29:50.095046   16209 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:29:50.101004   16209 start.go:297] selected driver: qemu2
	I0819 11:29:50.101011   16209 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-545000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-545000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:50.101080   16209 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:29:50.103627   16209 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:29:50.103682   16209 cni.go:84] Creating CNI manager for ""
	I0819 11:29:50.103690   16209 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 11:29:50.103718   16209 start.go:340] cluster config:
	{Name:old-k8s-version-545000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-545000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:50.107399   16209 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:50.116135   16209 out.go:177] * Starting "old-k8s-version-545000" primary control-plane node in "old-k8s-version-545000" cluster
	I0819 11:29:50.120092   16209 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:29:50.120105   16209 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:29:50.120114   16209 cache.go:56] Caching tarball of preloaded images
	I0819 11:29:50.120174   16209 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:29:50.120180   16209 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 11:29:50.120236   16209 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/old-k8s-version-545000/config.json ...
	I0819 11:29:50.120571   16209 start.go:360] acquireMachinesLock for old-k8s-version-545000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:50.120600   16209 start.go:364] duration metric: took 21.792µs to acquireMachinesLock for "old-k8s-version-545000"
	I0819 11:29:50.120609   16209 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:29:50.120617   16209 fix.go:54] fixHost starting: 
	I0819 11:29:50.120733   16209 fix.go:112] recreateIfNeeded on old-k8s-version-545000: state=Stopped err=<nil>
	W0819 11:29:50.120741   16209 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:29:50.124980   16209 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-545000" ...
	I0819 11:29:50.132084   16209 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:50.132118   16209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:dc:06:c2:a5:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2
	I0819 11:29:50.134061   16209 main.go:141] libmachine: STDOUT: 
	I0819 11:29:50.134079   16209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:50.134107   16209 fix.go:56] duration metric: took 13.492208ms for fixHost
	I0819 11:29:50.134111   16209 start.go:83] releasing machines lock for "old-k8s-version-545000", held for 13.50725ms
	W0819 11:29:50.134117   16209 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:29:50.134147   16209 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:50.134151   16209 start.go:729] Will try again in 5 seconds ...
	I0819 11:29:55.136362   16209 start.go:360] acquireMachinesLock for old-k8s-version-545000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:55.136674   16209 start.go:364] duration metric: took 240.083µs to acquireMachinesLock for "old-k8s-version-545000"
	I0819 11:29:55.136768   16209 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:29:55.136782   16209 fix.go:54] fixHost starting: 
	I0819 11:29:55.137241   16209 fix.go:112] recreateIfNeeded on old-k8s-version-545000: state=Stopped err=<nil>
	W0819 11:29:55.137257   16209 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:29:55.147262   16209 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-545000" ...
	I0819 11:29:55.150185   16209 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:55.150317   16209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:dc:06:c2:a5:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/old-k8s-version-545000/disk.qcow2
	I0819 11:29:55.156476   16209 main.go:141] libmachine: STDOUT: 
	I0819 11:29:55.156542   16209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:55.156608   16209 fix.go:56] duration metric: took 19.826542ms for fixHost
	I0819 11:29:55.156620   16209 start.go:83] releasing machines lock for "old-k8s-version-545000", held for 19.928417ms
	W0819 11:29:55.156771   16209 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-545000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-545000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:55.164232   16209 out.go:201] 
	W0819 11:29:55.168256   16209 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:29:55.168271   16209 out.go:270] * 
	* 
	W0819 11:29:55.169921   16209 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:29:55.179161   16209 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-545000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (60.157917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-545000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
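Note: the recurring root cause in this group is `Failed to connect to "/var/run/socket_vmnet": Connection refused`. minikube launches every qemu2 VM through /opt/socket_vmnet/bin/socket_vmnet_client, so no VM can start or restart while the socket_vmnet daemon is not listening on /var/run/socket_vmnet. A minimal recovery sketch for the affected host (the service name, direct invocation, and gateway address below are assumptions based on a typical socket_vmnet install, not taken from this log):

    # check whether the daemon's unix socket exists
    ls -l /var/run/socket_vmnet
    # if socket_vmnet was installed as a Homebrew service, restart it
    sudo brew services restart socket_vmnet
    # if it was installed from source under /opt/socket_vmnet, start it directly
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the daemon is back, the same `minikube start` invocation should get past the "Restarting existing qemu2 VM" step instead of retrying and exiting with status 80.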

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-545000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (32.220208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-545000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-545000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-545000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-545000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.781875ms)

** stderr ** 
	error: context "old-k8s-version-545000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-545000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (28.963542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-545000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-545000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (29.153ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-545000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
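Note: the `(-want +got)` block above is go-cmp diff notation; lines prefixed with `-` are images that were expected but missing from the `image list` output. Because the VM never booted, the command returned an empty image set, so every v1.20.0 image is reported missing rather than any single image being wrong. On a running profile the same list can be eyeballed with the table format (a manual cross-check, not part of the test):

    out/minikube-darwin-arm64 -p old-k8s-version-545000 image list --format=table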

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-545000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-545000 --alsologtostderr -v=1: exit status 83 (42.402625ms)

-- stdout --
	* The control-plane node old-k8s-version-545000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-545000"

-- /stdout --
** stderr ** 
	I0819 11:29:55.439385   16234 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:29:55.440229   16234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:55.440234   16234 out.go:358] Setting ErrFile to fd 2...
	I0819 11:29:55.440236   16234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:55.440366   16234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:29:55.440588   16234 out.go:352] Setting JSON to false
	I0819 11:29:55.440597   16234 mustload.go:65] Loading cluster: old-k8s-version-545000
	I0819 11:29:55.440779   16234 config.go:182] Loaded profile config "old-k8s-version-545000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 11:29:55.445632   16234 out.go:177] * The control-plane node old-k8s-version-545000 host is not running: state=Stopped
	I0819 11:29:55.449597   16234 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-545000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-545000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (28.945333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-545000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (29.066166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-545000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
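Note: `pause` exits with status 83 here simply because the control-plane host is stopped; the hint printed in stdout is the intended recovery path. Assuming the same profile, that would be:

    out/minikube-darwin-arm64 start -p old-k8s-version-545000

With socket_vmnet still down, however, this start would fail exactly like SecondStart above, so the daemon fix is a prerequisite.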

TestStartStop/group/no-preload/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-732000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-732000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.890226125s)

-- stdout --
	* [no-preload-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-732000" primary control-plane node in "no-preload-732000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-732000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:29:55.760077   16251 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:29:55.760299   16251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:55.760303   16251 out.go:358] Setting ErrFile to fd 2...
	I0819 11:29:55.760305   16251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:29:55.760429   16251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:29:55.761644   16251 out.go:352] Setting JSON to false
	I0819 11:29:55.778776   16251 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7162,"bootTime":1724085033,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:29:55.778849   16251 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:29:55.782684   16251 out.go:177] * [no-preload-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:29:55.788685   16251 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:29:55.788716   16251 notify.go:220] Checking for updates...
	I0819 11:29:55.794639   16251 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:29:55.797702   16251 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:29:55.800559   16251 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:29:55.803623   16251 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:29:55.806667   16251 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:29:55.808313   16251 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:29:55.808371   16251 config.go:182] Loaded profile config "stopped-upgrade-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:29:55.808421   16251 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:29:55.812695   16251 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:29:55.819485   16251 start.go:297] selected driver: qemu2
	I0819 11:29:55.819491   16251 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:29:55.819497   16251 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:29:55.821711   16251 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:29:55.824685   16251 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:29:55.827765   16251 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:29:55.827809   16251 cni.go:84] Creating CNI manager for ""
	I0819 11:29:55.827817   16251 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:29:55.827821   16251 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:29:55.827850   16251 start.go:340] cluster config:
	{Name:no-preload-732000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:29:55.831614   16251 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:55.838703   16251 out.go:177] * Starting "no-preload-732000" primary control-plane node in "no-preload-732000" cluster
	I0819 11:29:55.842701   16251 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:29:55.842799   16251 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/no-preload-732000/config.json ...
	I0819 11:29:55.842800   16251 cache.go:107] acquiring lock: {Name:mk31de0b539d07f41dc67acc7eaa814658264c01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:55.842800   16251 cache.go:107] acquiring lock: {Name:mk64a0dbd086912bce1440b78e0aa5d0cfe1f816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:55.842837   16251 cache.go:107] acquiring lock: {Name:mkf3dcfa0dc8618c60adb384e8cc29d6c97e1697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:55.842877   16251 cache.go:115] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 11:29:55.842891   16251 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.583µs
	I0819 11:29:55.842898   16251 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 11:29:55.842908   16251 cache.go:107] acquiring lock: {Name:mkf13489e2df0b1b51fedc614f76b98f623d9894 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:55.842962   16251 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 11:29:55.842824   16251 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/no-preload-732000/config.json: {Name:mkf156648a7e1990bca34835a7c1385a3321ce65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:29:55.842988   16251 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 11:29:55.842994   16251 cache.go:107] acquiring lock: {Name:mkd934efdb59a9e7d8c3f69b95e92c66a775fd91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:55.843021   16251 cache.go:107] acquiring lock: {Name:mke2d725ad4ded13b753e6e2f339dcf411846158 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:55.843026   16251 cache.go:107] acquiring lock: {Name:mkd6b18104352f0bd81c3ce9abe1ad449facca52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:55.843065   16251 cache.go:107] acquiring lock: {Name:mka78d1dd00f9cba65ed1f6844910bbda1c959b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:29:55.843092   16251 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 11:29:55.843227   16251 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 11:29:55.843369   16251 start.go:360] acquireMachinesLock for no-preload-732000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:29:55.843379   16251 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 11:29:55.843409   16251 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "no-preload-732000"
	I0819 11:29:55.843411   16251 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 11:29:55.843421   16251 start.go:93] Provisioning new machine with config: &{Name:no-preload-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:no-preload-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:29:55.843467   16251 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:29:55.843471   16251 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 11:29:55.850645   16251 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:29:55.855662   16251 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 11:29:55.855785   16251 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 11:29:55.855991   16251 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 11:29:55.855947   16251 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 11:29:55.856364   16251 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 11:29:55.856316   16251 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 11:29:55.856390   16251 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 11:29:55.867957   16251 start.go:159] libmachine.API.Create for "no-preload-732000" (driver="qemu2")
	I0819 11:29:55.867983   16251 client.go:168] LocalClient.Create starting
	I0819 11:29:55.868072   16251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:29:55.868105   16251 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:55.868115   16251 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:55.868175   16251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:29:55.868202   16251 main.go:141] libmachine: Decoding PEM data...
	I0819 11:29:55.868211   16251 main.go:141] libmachine: Parsing certificate...
	I0819 11:29:55.868639   16251 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:29:56.023069   16251 main.go:141] libmachine: Creating SSH key...
	I0819 11:29:56.164527   16251 main.go:141] libmachine: Creating Disk image...
	I0819 11:29:56.164546   16251 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:29:56.164785   16251 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2
	I0819 11:29:56.174254   16251 main.go:141] libmachine: STDOUT: 
	I0819 11:29:56.174269   16251 main.go:141] libmachine: STDERR: 
	I0819 11:29:56.174305   16251 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2 +20000M
	I0819 11:29:56.182760   16251 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:29:56.182774   16251 main.go:141] libmachine: STDERR: 
	I0819 11:29:56.182785   16251 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2
	I0819 11:29:56.182789   16251 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:29:56.182800   16251 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:29:56.182824   16251 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:3a:6e:ad:65:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2
	I0819 11:29:56.184570   16251 main.go:141] libmachine: STDOUT: 
	I0819 11:29:56.184585   16251 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:29:56.184599   16251 client.go:171] duration metric: took 316.613958ms to LocalClient.Create
	I0819 11:29:56.268946   16251 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 11:29:56.282926   16251 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 11:29:56.283288   16251 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 11:29:56.296635   16251 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0819 11:29:56.322896   16251 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 11:29:56.340369   16251 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 11:29:56.391953   16251 cache.go:162] opening:  /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0819 11:29:56.445125   16251 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0819 11:29:56.445159   16251 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 602.120083ms
	I0819 11:29:56.445172   16251 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0819 11:29:58.184799   16251 start.go:128] duration metric: took 2.341318s to createHost
	I0819 11:29:58.184849   16251 start.go:83] releasing machines lock for "no-preload-732000", held for 2.341444708s
	W0819 11:29:58.184912   16251 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:58.194375   16251 out.go:177] * Deleting "no-preload-732000" in qemu2 ...
	W0819 11:29:58.219477   16251 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:29:58.219511   16251 start.go:729] Will try again in 5 seconds ...
	I0819 11:29:59.003886   16251 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0819 11:29:59.003915   16251 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.161020333s
	I0819 11:29:59.003928   16251 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0819 11:29:59.334275   16251 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0819 11:29:59.334305   16251 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 3.491359292s
	I0819 11:29:59.334317   16251 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0819 11:29:59.344070   16251 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0819 11:29:59.344078   16251 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.501069292s
	I0819 11:29:59.344094   16251 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0819 11:29:59.679444   16251 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0819 11:29:59.679472   16251 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 3.836696458s
	I0819 11:29:59.679485   16251 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0819 11:30:00.526018   16251 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0819 11:30:00.526052   16251 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.683234542s
	I0819 11:30:00.526069   16251 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0819 11:30:03.219762   16251 start.go:360] acquireMachinesLock for no-preload-732000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:03.220099   16251 start.go:364] duration metric: took 283.375µs to acquireMachinesLock for "no-preload-732000"
	I0819 11:30:03.220173   16251 start.go:93] Provisioning new machine with config: &{Name:no-preload-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:no-preload-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:30:03.220292   16251 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:30:03.226855   16251 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:30:03.270657   16251 start.go:159] libmachine.API.Create for "no-preload-732000" (driver="qemu2")
	I0819 11:30:03.270707   16251 client.go:168] LocalClient.Create starting
	I0819 11:30:03.270813   16251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:30:03.270881   16251 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:03.270901   16251 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:03.270981   16251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:30:03.271032   16251 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:03.271046   16251 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:03.271547   16251 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:30:03.459562   16251 main.go:141] libmachine: Creating SSH key...
	I0819 11:30:03.554683   16251 main.go:141] libmachine: Creating Disk image...
	I0819 11:30:03.554692   16251 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:30:03.554919   16251 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2
	I0819 11:30:03.564577   16251 main.go:141] libmachine: STDOUT: 
	I0819 11:30:03.564602   16251 main.go:141] libmachine: STDERR: 
	I0819 11:30:03.564658   16251 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2 +20000M
	I0819 11:30:03.572799   16251 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:30:03.572812   16251 main.go:141] libmachine: STDERR: 
	I0819 11:30:03.572829   16251 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2
	I0819 11:30:03.572832   16251 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:30:03.572843   16251 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:03.572880   16251 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:81:5d:9c:8a:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2
	I0819 11:30:03.574541   16251 main.go:141] libmachine: STDOUT: 
	I0819 11:30:03.574555   16251 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:03.574568   16251 client.go:171] duration metric: took 303.856334ms to LocalClient.Create
	I0819 11:30:04.123919   16251 cache.go:157] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0819 11:30:04.123952   16251 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.281042125s
	I0819 11:30:04.123961   16251 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0819 11:30:04.123984   16251 cache.go:87] Successfully saved all images to host disk.
	I0819 11:30:05.576813   16251 start.go:128] duration metric: took 2.3564805s to createHost
	I0819 11:30:05.576942   16251 start.go:83] releasing machines lock for "no-preload-732000", held for 2.356823583s
	W0819 11:30:05.577242   16251 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-732000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-732000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:05.590699   16251 out.go:201] 
	W0819 11:30:05.594745   16251 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:05.594761   16251 out.go:270] * 
	* 
	W0819 11:30:05.596106   16251 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:30:05.608709   16251 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-732000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (57.911791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)
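Note: every failure in this group traces back to the stderr above: the qemu2 driver launches qemu-system-aarch64 through socket_vmnet_client, and the socket_vmnet daemon is not accepting connections on the SocketVMnetPath shown in the config dump. A minimal Go sketch (not part of the test suite; the socket path is taken from the log) to probe the daemon directly:

	// probe_socket_vmnet.go: dial the unix socket the qemu2 driver depends on.
	// A "connection refused" error here reproduces the failure in this log.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

If the dial is refused, as on this agent, the daemon needs to be restarted on the host; the exact command depends on how socket_vmnet was installed.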

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-732000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-732000 create -f testdata/busybox.yaml: exit status 1 (29.575791ms)

** stderr ** 
	error: context "no-preload-732000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-732000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (30.619167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-732000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (29.8285ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
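Note: DeployApp never reaches a cluster; kubectl exits immediately because the "no-preload-732000" context was never written, FirstStart having failed before the kubeconfig update. A sketch using client-go to list what the run's kubeconfig actually contains (the path comes from the KUBECONFIG value in the logs; illustrative, not part of the suite):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// KUBECONFIG as reported in the start logs for this run.
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19468-11838/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		for name := range cfg.Contexts {
			fmt.Println("context:", name)
		}
		fmt.Println("current-context:", cfg.CurrentContext)
	}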

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-732000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-732000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-732000 describe deploy/metrics-server -n kube-system: exit status 1 (28.260792ms)

** stderr ** 
	error: context "no-preload-732000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-732000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (28.851041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
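Note: `addons enable` itself succeeds because it only updates the profile config, but the follow-up describe fails on the same missing context, so the deployment info the test inspects is empty. The check at start_stop_delete_test.go:221 amounts to a substring assertion along these lines (illustrative sketch, not the actual test source):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		deployInfo := "" // empty here: `kubectl describe` had no context to query
		want := " fake.domain/registry.k8s.io/echoserver:1.4"
		if !strings.Contains(deployInfo, want) {
			fmt.Printf("addon did not load correct image. Expected to contain %q\n", want)
		}
	}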

TestStartStop/group/embed-certs/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-750000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-750000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.936552042s)

-- stdout --
	* [embed-certs-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-750000" primary control-plane node in "embed-certs-750000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-750000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:30:08.167856   16613 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:08.167987   16613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:08.167991   16613 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:08.167994   16613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:08.168135   16613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:08.169168   16613 out.go:352] Setting JSON to false
	I0819 11:30:08.185197   16613 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7175,"bootTime":1724085033,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:30:08.185294   16613 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:30:08.190548   16613 out.go:177] * [embed-certs-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:30:08.198508   16613 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:30:08.198539   16613 notify.go:220] Checking for updates...
	I0819 11:30:08.205438   16613 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:30:08.208486   16613 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:30:08.211537   16613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:30:08.214504   16613 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:30:08.217465   16613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:30:08.220906   16613 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:08.220982   16613 config.go:182] Loaded profile config "no-preload-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:08.221031   16613 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:30:08.225460   16613 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:30:08.232509   16613 start.go:297] selected driver: qemu2
	I0819 11:30:08.232514   16613 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:30:08.232520   16613 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:30:08.234953   16613 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:30:08.237459   16613 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:30:08.240552   16613 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:30:08.240586   16613 cni.go:84] Creating CNI manager for ""
	I0819 11:30:08.240592   16613 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:30:08.240600   16613 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:30:08.240630   16613 start.go:340] cluster config:
	{Name:embed-certs-750000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:08.244297   16613 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:08.251512   16613 out.go:177] * Starting "embed-certs-750000" primary control-plane node in "embed-certs-750000" cluster
	I0819 11:30:08.255489   16613 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:30:08.255507   16613 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:30:08.255520   16613 cache.go:56] Caching tarball of preloaded images
	I0819 11:30:08.255590   16613 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:30:08.255597   16613 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:30:08.255668   16613 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/embed-certs-750000/config.json ...
	I0819 11:30:08.255681   16613 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/embed-certs-750000/config.json: {Name:mka1729d9d094ab4242413d820f34622dd6a4c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:08.255925   16613 start.go:360] acquireMachinesLock for embed-certs-750000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:08.255968   16613 start.go:364] duration metric: took 35.625µs to acquireMachinesLock for "embed-certs-750000"
	I0819 11:30:08.255984   16613 start.go:93] Provisioning new machine with config: &{Name:embed-certs-750000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:30:08.256026   16613 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:30:08.263428   16613 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:30:08.281198   16613 start.go:159] libmachine.API.Create for "embed-certs-750000" (driver="qemu2")
	I0819 11:30:08.281230   16613 client.go:168] LocalClient.Create starting
	I0819 11:30:08.281299   16613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:30:08.281338   16613 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:08.281347   16613 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:08.281386   16613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:30:08.281413   16613 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:08.281424   16613 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:08.281790   16613 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:30:08.434082   16613 main.go:141] libmachine: Creating SSH key...
	I0819 11:30:08.535008   16613 main.go:141] libmachine: Creating Disk image...
	I0819 11:30:08.535014   16613 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:30:08.535245   16613 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2
	I0819 11:30:08.544383   16613 main.go:141] libmachine: STDOUT: 
	I0819 11:30:08.544401   16613 main.go:141] libmachine: STDERR: 
	I0819 11:30:08.544449   16613 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2 +20000M
	I0819 11:30:08.552337   16613 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:30:08.552355   16613 main.go:141] libmachine: STDERR: 
	I0819 11:30:08.552370   16613 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2
	I0819 11:30:08.552376   16613 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:30:08.552387   16613 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:08.552418   16613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:2c:37:4d:39:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2
	I0819 11:30:08.554035   16613 main.go:141] libmachine: STDOUT: 
	I0819 11:30:08.554057   16613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:08.554075   16613 client.go:171] duration metric: took 272.842125ms to LocalClient.Create
	I0819 11:30:10.556258   16613 start.go:128] duration metric: took 2.300219209s to createHost
	I0819 11:30:10.556333   16613 start.go:83] releasing machines lock for "embed-certs-750000", held for 2.300366208s
	W0819 11:30:10.556395   16613 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:10.574911   16613 out.go:177] * Deleting "embed-certs-750000" in qemu2 ...
	W0819 11:30:10.605303   16613 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:10.605331   16613 start.go:729] Will try again in 5 seconds ...
	I0819 11:30:15.607435   16613 start.go:360] acquireMachinesLock for embed-certs-750000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:15.623298   16613 start.go:364] duration metric: took 15.793625ms to acquireMachinesLock for "embed-certs-750000"
	I0819 11:30:15.623379   16613 start.go:93] Provisioning new machine with config: &{Name:embed-certs-750000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:30:15.623590   16613 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:30:15.636688   16613 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:30:15.683869   16613 start.go:159] libmachine.API.Create for "embed-certs-750000" (driver="qemu2")
	I0819 11:30:15.683915   16613 client.go:168] LocalClient.Create starting
	I0819 11:30:15.684038   16613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:30:15.684110   16613 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:15.684125   16613 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:15.684196   16613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:30:15.684244   16613 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:15.684260   16613 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:15.684753   16613 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:30:15.843384   16613 main.go:141] libmachine: Creating SSH key...
	I0819 11:30:16.008524   16613 main.go:141] libmachine: Creating Disk image...
	I0819 11:30:16.008533   16613 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:30:16.008753   16613 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2
	I0819 11:30:16.018528   16613 main.go:141] libmachine: STDOUT: 
	I0819 11:30:16.018558   16613 main.go:141] libmachine: STDERR: 
	I0819 11:30:16.018615   16613 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2 +20000M
	I0819 11:30:16.028229   16613 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:30:16.028247   16613 main.go:141] libmachine: STDERR: 
	I0819 11:30:16.028265   16613 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2
	I0819 11:30:16.028270   16613 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:30:16.028279   16613 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:16.028307   16613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:87:43:a5:f1:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2
	I0819 11:30:16.030484   16613 main.go:141] libmachine: STDOUT: 
	I0819 11:30:16.030506   16613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:16.030519   16613 client.go:171] duration metric: took 346.600333ms to LocalClient.Create
	I0819 11:30:18.032817   16613 start.go:128] duration metric: took 2.409169625s to createHost
	I0819 11:30:18.032920   16613 start.go:83] releasing machines lock for "embed-certs-750000", held for 2.409585958s
	W0819 11:30:18.033291   16613 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:18.048013   16613 out.go:201] 
	W0819 11:30:18.049781   16613 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:18.049823   16613 out.go:270] * 
	* 
	W0819 11:30:18.052449   16613 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:30:18.063962   16613 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-750000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (52.465791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.99s)
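Note: the embed-certs log shows the full recovery path: StartHost fails, the profile is deleted, minikube waits 5 seconds and retries once, and only then exits with GUEST_PROVISION. The failing step can be reproduced outside minikube by giving socket_vmnet_client a trivial payload (paths from the log; /usr/bin/true is a stand-in command, assuming the client execs its remaining argv once the socket connection succeeds):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same client binary and socket path the qemu2 driver invokes above.
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", "/usr/bin/true")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\nerr: %v\n", out, err)
	}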

TestStartStop/group/no-preload/serial/SecondStart (6.63s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-732000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-732000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.580333792s)

-- stdout --
	* [no-preload-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-732000" primary control-plane node in "no-preload-732000" cluster
	* Restarting existing qemu2 VM for "no-preload-732000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-732000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:30:09.111359   16631 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:09.111480   16631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:09.111484   16631 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:09.111486   16631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:09.111610   16631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:09.112611   16631 out.go:352] Setting JSON to false
	I0819 11:30:09.128518   16631 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7176,"bootTime":1724085033,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:30:09.128585   16631 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:30:09.133498   16631 out.go:177] * [no-preload-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:30:09.139521   16631 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:30:09.139595   16631 notify.go:220] Checking for updates...
	I0819 11:30:09.146504   16631 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:30:09.149559   16631 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:30:09.152540   16631 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:30:09.155477   16631 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:30:09.158406   16631 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:30:09.161844   16631 config.go:182] Loaded profile config "no-preload-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:09.162116   16631 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:30:09.166465   16631 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:30:09.173498   16631 start.go:297] selected driver: qemu2
	I0819 11:30:09.173507   16631 start.go:901] validating driver "qemu2" against &{Name:no-preload-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:09.173554   16631 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:30:09.175654   16631 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:30:09.175681   16631 cni.go:84] Creating CNI manager for ""
	I0819 11:30:09.175688   16631 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:30:09.175711   16631 start.go:340] cluster config:
	{Name:no-preload-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:09.178986   16631 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:09.187406   16631 out.go:177] * Starting "no-preload-732000" primary control-plane node in "no-preload-732000" cluster
	I0819 11:30:09.191459   16631 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:30:09.191559   16631 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/no-preload-732000/config.json ...
	I0819 11:30:09.191563   16631 cache.go:107] acquiring lock: {Name:mk64a0dbd086912bce1440b78e0aa5d0cfe1f816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:09.191570   16631 cache.go:107] acquiring lock: {Name:mkf3dcfa0dc8618c60adb384e8cc29d6c97e1697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:09.191570   16631 cache.go:107] acquiring lock: {Name:mk31de0b539d07f41dc67acc7eaa814658264c01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:09.191630   16631 cache.go:115] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 11:30:09.191634   16631 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 76.083µs
	I0819 11:30:09.191645   16631 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 11:30:09.191643   16631 cache.go:115] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0819 11:30:09.191651   16631 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 82.708µs
	I0819 11:30:09.191655   16631 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0819 11:30:09.191651   16631 cache.go:107] acquiring lock: {Name:mka78d1dd00f9cba65ed1f6844910bbda1c959b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:09.191662   16631 cache.go:107] acquiring lock: {Name:mkd934efdb59a9e7d8c3f69b95e92c66a775fd91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:09.191667   16631 cache.go:115] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0819 11:30:09.191672   16631 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 113.583µs
	I0819 11:30:09.191677   16631 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0819 11:30:09.191696   16631 cache.go:115] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0819 11:30:09.191702   16631 cache.go:115] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0819 11:30:09.191704   16631 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 53.458µs
	I0819 11:30:09.191705   16631 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 44.084µs
	I0819 11:30:09.191710   16631 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0819 11:30:09.191708   16631 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0819 11:30:09.191684   16631 cache.go:107] acquiring lock: {Name:mke2d725ad4ded13b753e6e2f339dcf411846158 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:09.191759   16631 cache.go:107] acquiring lock: {Name:mkd6b18104352f0bd81c3ce9abe1ad449facca52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:09.191776   16631 cache.go:115] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0819 11:30:09.191782   16631 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 99.125µs
	I0819 11:30:09.191786   16631 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0819 11:30:09.191783   16631 cache.go:107] acquiring lock: {Name:mkf13489e2df0b1b51fedc614f76b98f623d9894 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:09.191813   16631 cache.go:115] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0819 11:30:09.191818   16631 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 85.708µs
	I0819 11:30:09.191824   16631 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0819 11:30:09.191846   16631 cache.go:115] /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0819 11:30:09.191851   16631 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 115.542µs
	I0819 11:30:09.191860   16631 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0819 11:30:09.191864   16631 cache.go:87] Successfully saved all images to host disk.
	I0819 11:30:09.191988   16631 start.go:360] acquireMachinesLock for no-preload-732000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:10.556467   16631 start.go:364] duration metric: took 1.364432s to acquireMachinesLock for "no-preload-732000"
	I0819 11:30:10.556611   16631 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:30:10.556645   16631 fix.go:54] fixHost starting: 
	I0819 11:30:10.557324   16631 fix.go:112] recreateIfNeeded on no-preload-732000: state=Stopped err=<nil>
	W0819 11:30:10.557366   16631 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:30:10.566935   16631 out.go:177] * Restarting existing qemu2 VM for "no-preload-732000" ...
	I0819 11:30:10.577846   16631 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:10.578069   16631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:81:5d:9c:8a:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2
	I0819 11:30:10.589202   16631 main.go:141] libmachine: STDOUT: 
	I0819 11:30:10.589288   16631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:10.589425   16631 fix.go:56] duration metric: took 32.780416ms for fixHost
	I0819 11:30:10.589446   16631 start.go:83] releasing machines lock for "no-preload-732000", held for 32.949833ms
	W0819 11:30:10.589491   16631 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:10.589659   16631 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:10.589678   16631 start.go:729] Will try again in 5 seconds ...
	I0819 11:30:15.591922   16631 start.go:360] acquireMachinesLock for no-preload-732000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:15.592354   16631 start.go:364] duration metric: took 332.459µs to acquireMachinesLock for "no-preload-732000"
	I0819 11:30:15.592487   16631 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:30:15.592508   16631 fix.go:54] fixHost starting: 
	I0819 11:30:15.593219   16631 fix.go:112] recreateIfNeeded on no-preload-732000: state=Stopped err=<nil>
	W0819 11:30:15.593244   16631 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:30:15.605916   16631 out.go:177] * Restarting existing qemu2 VM for "no-preload-732000" ...
	I0819 11:30:15.612677   16631 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:15.613010   16631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:81:5d:9c:8a:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/no-preload-732000/disk.qcow2
	I0819 11:30:15.622974   16631 main.go:141] libmachine: STDOUT: 
	I0819 11:30:15.623059   16631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:15.623164   16631 fix.go:56] duration metric: took 30.6595ms for fixHost
	I0819 11:30:15.623192   16631 start.go:83] releasing machines lock for "no-preload-732000", held for 30.815208ms
	W0819 11:30:15.623447   16631 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-732000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-732000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:15.639657   16631 out.go:201] 
	W0819 11:30:15.643789   16631 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:15.643817   16631 out.go:270] * 
	* 
	W0819 11:30:15.645634   16631 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:30:15.655691   16631 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-732000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (49.930334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.63s)
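Every failure in this group (and in the groups below) bottoms out in the same driver error: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A minimal diagnostic sketch, assuming a Go toolchain on the build host (this is not part of minikube; the socket path is taken from the SocketVMnetPath field in the profile configs logged above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Same unix socket the qemu2 driver hands to socket_vmnet_client.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "Connection refused" here reproduces the driver failure: the
		// socket file may exist, but no socket_vmnet daemon is accepting.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

If the dial fails on the agent, restarting the socket_vmnet service (on Homebrew installs, typically sudo brew services restart socket_vmnet) is the usual remedy; the tests below never get past this point.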

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-732000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (33.704083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
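The rest of this group never reaches a cluster at all: because SecondStart exited before provisioning, no "no-preload-732000" context was ever written to the kubeconfig, so each kubectl step fails up front. A hedged sketch of that check using k8s.io/client-go (an added dependency for illustration, not how the test itself verifies it):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG as set in the runs above:
	// /Users/jenkins/minikube-integration/19468-11838/kubeconfig
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot load kubeconfig:", err)
		os.Exit(1)
	}
	const name = "no-preload-732000"
	if _, ok := cfg.Contexts[name]; !ok {
		// Matches kubectl's complaint in the steps that follow.
		fmt.Printf("context %q does not exist\n", name)
		return
	}
	fmt.Printf("context %q present\n", name)
}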

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-732000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-732000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-732000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.835833ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-732000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-732000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (33.718ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-732000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (30.794875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)
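The -want +got diff above is entirely one-sided: with the VM stopped, image list has nothing to report, so every expected v1.31.0 image counts as missing. A small sketch of the set comparison that diff encodes (the want list is copied verbatim from the output; the empty got set stands in for the stopped profile):

package main

import "fmt"

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/kube-controller-manager:v1.31.0",
		"registry.k8s.io/kube-proxy:v1.31.0",
		"registry.k8s.io/kube-scheduler:v1.31.0",
		"registry.k8s.io/pause:3.10",
	}
	got := map[string]bool{} // empty: no running VM to list images from
	for _, img := range want {
		if !got[img] {
			fmt.Println("missing:", img)
		}
	}
}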

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-732000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-732000 --alsologtostderr -v=1: exit status 83 (45.104667ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-732000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-732000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:30:16.039526   16652 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:16.039666   16652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:16.039669   16652 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:16.039679   16652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:16.039821   16652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:16.040053   16652 out.go:352] Setting JSON to false
	I0819 11:30:16.040062   16652 mustload.go:65] Loading cluster: no-preload-732000
	I0819 11:30:16.040278   16652 config.go:182] Loaded profile config "no-preload-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:16.044798   16652 out.go:177] * The control-plane node no-preload-732000 host is not running: state=Stopped
	I0819 11:30:16.048739   16652 out.go:177]   To start a cluster, run: "minikube start -p no-preload-732000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-732000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (28.658792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-732000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (29.211375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
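Exit status 83 in this step is advisory rather than a crash: pause found the profile, but its host is Stopped, so it printed guidance and bailed. A hedged sketch that reruns the same invocation and surfaces the exit code the helpers interpret (binary path and profile name as in this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// The same command the test runs above.
	cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "no-preload-732000", "--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 83 here: the control-plane host is Stopped, so there is nothing to pause.
		fmt.Println("exit status:", exitErr.ExitCode())
	}
}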

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (11.464116792s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-406000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-406000" primary control-plane node in "default-k8s-diff-port-406000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-406000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:30:16.463199   16678 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:16.463318   16678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:16.463321   16678 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:16.463323   16678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:16.463463   16678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:16.464515   16678 out.go:352] Setting JSON to false
	I0819 11:30:16.480772   16678 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7183,"bootTime":1724085033,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:30:16.480833   16678 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:30:16.485363   16678 out.go:177] * [default-k8s-diff-port-406000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:30:16.491318   16678 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:30:16.491368   16678 notify.go:220] Checking for updates...
	I0819 11:30:16.499227   16678 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:30:16.502334   16678 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:30:16.505317   16678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:30:16.508298   16678 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:30:16.511322   16678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:30:16.514684   16678 config.go:182] Loaded profile config "embed-certs-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:16.514745   16678 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:16.514792   16678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:30:16.519271   16678 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:30:16.526379   16678 start.go:297] selected driver: qemu2
	I0819 11:30:16.526387   16678 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:30:16.526404   16678 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:30:16.528750   16678 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:30:16.531229   16678 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:30:16.534371   16678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:30:16.534390   16678 cni.go:84] Creating CNI manager for ""
	I0819 11:30:16.534396   16678 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:30:16.534405   16678 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:30:16.534436   16678 start.go:340] cluster config:
	{Name:default-k8s-diff-port-406000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:16.538112   16678 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:16.545197   16678 out.go:177] * Starting "default-k8s-diff-port-406000" primary control-plane node in "default-k8s-diff-port-406000" cluster
	I0819 11:30:16.549291   16678 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:30:16.549308   16678 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:30:16.549318   16678 cache.go:56] Caching tarball of preloaded images
	I0819 11:30:16.549388   16678 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:30:16.549401   16678 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:30:16.549471   16678 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/default-k8s-diff-port-406000/config.json ...
	I0819 11:30:16.549483   16678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/default-k8s-diff-port-406000/config.json: {Name:mk8089c4ff8affa28db7e8e2e111c12b23118f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:16.549721   16678 start.go:360] acquireMachinesLock for default-k8s-diff-port-406000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:18.033077   16678 start.go:364] duration metric: took 1.483338208s to acquireMachinesLock for "default-k8s-diff-port-406000"
	I0819 11:30:18.033324   16678 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:30:18.033521   16678 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:30:18.048022   16678 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:30:18.099116   16678 start.go:159] libmachine.API.Create for "default-k8s-diff-port-406000" (driver="qemu2")
	I0819 11:30:18.099168   16678 client.go:168] LocalClient.Create starting
	I0819 11:30:18.099287   16678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:30:18.099386   16678 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:18.099401   16678 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:18.099467   16678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:30:18.099512   16678 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:18.099529   16678 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:18.100112   16678 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:30:18.257474   16678 main.go:141] libmachine: Creating SSH key...
	I0819 11:30:18.392879   16678 main.go:141] libmachine: Creating Disk image...
	I0819 11:30:18.392889   16678 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:30:18.396460   16678 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0819 11:30:18.410495   16678 main.go:141] libmachine: STDOUT: 
	I0819 11:30:18.410524   16678 main.go:141] libmachine: STDERR: 
	I0819 11:30:18.410604   16678 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2 +20000M
	I0819 11:30:18.421220   16678 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:30:18.421258   16678 main.go:141] libmachine: STDERR: 
	I0819 11:30:18.421286   16678 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0819 11:30:18.421311   16678 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:30:18.421327   16678 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:18.421360   16678 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:24:da:f0:39:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0819 11:30:18.423459   16678 main.go:141] libmachine: STDOUT: 
	I0819 11:30:18.423483   16678 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:18.423504   16678 client.go:171] duration metric: took 324.333209ms to LocalClient.Create
	I0819 11:30:20.425764   16678 start.go:128] duration metric: took 2.392224542s to createHost
	I0819 11:30:20.425830   16678 start.go:83] releasing machines lock for "default-k8s-diff-port-406000", held for 2.392697625s
	W0819 11:30:20.425902   16678 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:20.431916   16678 out.go:177] * Deleting "default-k8s-diff-port-406000" in qemu2 ...
	W0819 11:30:20.459880   16678 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:20.459921   16678 start.go:729] Will try again in 5 seconds ...
	I0819 11:30:25.462098   16678 start.go:360] acquireMachinesLock for default-k8s-diff-port-406000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:25.462455   16678 start.go:364] duration metric: took 282.458µs to acquireMachinesLock for "default-k8s-diff-port-406000"
	I0819 11:30:25.462567   16678 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:30:25.462920   16678 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:30:25.471450   16678 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:30:25.521532   16678 start.go:159] libmachine.API.Create for "default-k8s-diff-port-406000" (driver="qemu2")
	I0819 11:30:25.521575   16678 client.go:168] LocalClient.Create starting
	I0819 11:30:25.521687   16678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:30:25.521769   16678 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:25.521783   16678 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:25.521863   16678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:30:25.521912   16678 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:25.521926   16678 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:25.522494   16678 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:30:25.695422   16678 main.go:141] libmachine: Creating SSH key...
	I0819 11:30:25.815953   16678 main.go:141] libmachine: Creating Disk image...
	I0819 11:30:25.815966   16678 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:30:25.816201   16678 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0819 11:30:25.825419   16678 main.go:141] libmachine: STDOUT: 
	I0819 11:30:25.825448   16678 main.go:141] libmachine: STDERR: 
	I0819 11:30:25.825498   16678 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2 +20000M
	I0819 11:30:25.833502   16678 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:30:25.833516   16678 main.go:141] libmachine: STDERR: 
	I0819 11:30:25.833534   16678 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0819 11:30:25.833544   16678 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:30:25.833550   16678 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:25.833573   16678 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:8b:c9:88:6d:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0819 11:30:25.835155   16678 main.go:141] libmachine: STDOUT: 
	I0819 11:30:25.835170   16678 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:25.835187   16678 client.go:171] duration metric: took 313.607375ms to LocalClient.Create
	I0819 11:30:27.837474   16678 start.go:128] duration metric: took 2.374500458s to createHost
	I0819 11:30:27.837609   16678 start.go:83] releasing machines lock for "default-k8s-diff-port-406000", held for 2.375107375s
	W0819 11:30:27.837915   16678 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-406000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-406000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:27.850411   16678 out.go:201] 
	W0819 11:30:27.860511   16678 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:27.860538   16678 out.go:270] * 
	* 
	W0819 11:30:27.863075   16678 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:30:27.874380   16678 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (66.655167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.53s)
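Note the shape of this failure: both qemu-img steps (convert, then resize) succeed, so disk creation is healthy; only the socket_vmnet_client hand-off fails, after which minikube deletes the half-built machine, waits 5 seconds, retries once, and exits with GUEST_PROVISION. A minimal sketch of that retry shape (startHost is a stand-in for illustration, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails throughout this log.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}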

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-750000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-750000 create -f testdata/busybox.yaml: exit status 1 (30.748625ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-750000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-750000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (34.286292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-750000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (34.186542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-750000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-750000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-750000 describe deploy/metrics-server -n kube-system: exit status 1 (28.043ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-750000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-750000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (28.990459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.25s)
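Unlike the kubectl steps, addons enable itself exits 0 here: it only records the addon and its image/registry overrides in the profile's config.json (visible in the Addons and CustomAddonImages fields of the config dump below), which works even with the VM down; it is the follow-up check against the cluster that fails. A hedged sketch reading that record back (the path follows the "Saving config to ..." lines in this log; the Addons field name is inferred from the dump, not a documented interface):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// MINIKUBE_HOME in this run: /Users/jenkins/minikube-integration/19468-11838/.minikube
	path := os.Getenv("MINIKUBE_HOME") + "/profiles/embed-certs-750000/config.json"
	raw, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg struct {
		Addons map[string]bool // field name inferred from the config dump
	}
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("addons recorded in profile:", cfg.Addons)
}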

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.95s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-750000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-750000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.895826792s)

                                                
                                                
-- stdout --
	* [embed-certs-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-750000" primary control-plane node in "embed-certs-750000" cluster
	* Restarting existing qemu2 VM for "embed-certs-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:30:22.046156   16724 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:22.046290   16724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:22.046294   16724 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:22.046296   16724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:22.046441   16724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:22.047459   16724 out.go:352] Setting JSON to false
	I0819 11:30:22.063393   16724 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7189,"bootTime":1724085033,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:30:22.063456   16724 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:30:22.068523   16724 out.go:177] * [embed-certs-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:30:22.076537   16724 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:30:22.076592   16724 notify.go:220] Checking for updates...
	I0819 11:30:22.084454   16724 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:30:22.087504   16724 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:30:22.090509   16724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:30:22.093471   16724 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:30:22.096490   16724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:30:22.099838   16724 config.go:182] Loaded profile config "embed-certs-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:22.100107   16724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:30:22.103370   16724 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:30:22.110438   16724 start.go:297] selected driver: qemu2
	I0819 11:30:22.110443   16724 start.go:901] validating driver "qemu2" against &{Name:embed-certs-750000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:22.110497   16724 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:30:22.112900   16724 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:30:22.112945   16724 cni.go:84] Creating CNI manager for ""
	I0819 11:30:22.112953   16724 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:30:22.112976   16724 start.go:340] cluster config:
	{Name:embed-certs-750000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:22.116692   16724 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:22.125444   16724 out.go:177] * Starting "embed-certs-750000" primary control-plane node in "embed-certs-750000" cluster
	I0819 11:30:22.129538   16724 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:30:22.129557   16724 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:30:22.129569   16724 cache.go:56] Caching tarball of preloaded images
	I0819 11:30:22.129634   16724 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:30:22.129640   16724 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:30:22.129706   16724 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/embed-certs-750000/config.json ...
	I0819 11:30:22.130159   16724 start.go:360] acquireMachinesLock for embed-certs-750000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:22.130199   16724 start.go:364] duration metric: took 33.208µs to acquireMachinesLock for "embed-certs-750000"
	I0819 11:30:22.130209   16724 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:30:22.130215   16724 fix.go:54] fixHost starting: 
	I0819 11:30:22.130339   16724 fix.go:112] recreateIfNeeded on embed-certs-750000: state=Stopped err=<nil>
	W0819 11:30:22.130349   16724 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:30:22.138497   16724 out.go:177] * Restarting existing qemu2 VM for "embed-certs-750000" ...
	I0819 11:30:22.142485   16724 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:22.142524   16724 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:87:43:a5:f1:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2
	I0819 11:30:22.144601   16724 main.go:141] libmachine: STDOUT: 
	I0819 11:30:22.144623   16724 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:22.144659   16724 fix.go:56] duration metric: took 14.445041ms for fixHost
	I0819 11:30:22.144664   16724 start.go:83] releasing machines lock for "embed-certs-750000", held for 14.45975ms
	W0819 11:30:22.144671   16724 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:22.144704   16724 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:22.144709   16724 start.go:729] Will try again in 5 seconds ...
	I0819 11:30:27.145495   16724 start.go:360] acquireMachinesLock for embed-certs-750000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:27.837749   16724 start.go:364] duration metric: took 692.108ms to acquireMachinesLock for "embed-certs-750000"
	I0819 11:30:27.837918   16724 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:30:27.837936   16724 fix.go:54] fixHost starting: 
	I0819 11:30:27.838639   16724 fix.go:112] recreateIfNeeded on embed-certs-750000: state=Stopped err=<nil>
	W0819 11:30:27.838665   16724 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:30:27.856358   16724 out.go:177] * Restarting existing qemu2 VM for "embed-certs-750000" ...
	I0819 11:30:27.863489   16724 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:27.863692   16724 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:87:43:a5:f1:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/embed-certs-750000/disk.qcow2
	I0819 11:30:27.872579   16724 main.go:141] libmachine: STDOUT: 
	I0819 11:30:27.872639   16724 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:27.872749   16724 fix.go:56] duration metric: took 34.81425ms for fixHost
	I0819 11:30:27.872770   16724 start.go:83] releasing machines lock for "embed-certs-750000", held for 34.976125ms
	W0819 11:30:27.872955   16724 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:27.885449   16724 out.go:201] 
	W0819 11:30:27.889492   16724 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:27.889533   16724 out.go:270] * 
	* 
	W0819 11:30:27.892331   16724 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:30:27.902547   16724 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-750000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (57.838041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.95s)
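
Every failure in this group reduces to the same root cause visible in the STDERR lines above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU the network file descriptor (the `-netdev socket,id=net0,fd=3` in the command line) and the VM never boots. Below is a minimal standalone sketch, not part of the test suite, of the connectivity probe implied by that error; the socket path is copied verbatim from the log.

// socketcheck.go — minimal standalone sketch, not part of the minikube
// test suite: probes the unix socket that socket_vmnet_client fails to
// reach above. The socket path is copied from the log.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this runner this prints "connection refused", matching the
		// STDERR emitted by socket_vmnet_client before every failed start.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails, restarting the socket_vmnet daemon (which typically runs as root and owns /var/run/socket_vmnet) on the runner is the first thing to try before re-running the suite.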

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-406000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-406000 create -f testdata/busybox.yaml: exit status 1 (31.287125ms)

** stderr ** 
	error: context "default-k8s-diff-port-406000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-406000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (39.875417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (29.763666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
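
The kubectl error above is a downstream symptom rather than a separate bug: because the earlier start of default-k8s-diff-port-406000 exited with status 80, minikube never wrote a context for the profile into the kubeconfig, so any `kubectl --context` call fails immediately. Here is a hedged sketch of that precondition check using the standard k8s.io/client-go/tools/clientcmd loader; the profile name is taken from the log, nothing else is from the suite.

// ctxcheck.go — hedged sketch, not from the test suite: checks whether a
// profile's context was ever written to the kubeconfig that $KUBECONFIG
// points at (here the 19468-11838 integration kubeconfig).
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// NewDefaultClientConfigLoadingRules honours $KUBECONFIG, falling back
	// to ~/.kube/config — the same resolution kubectl uses.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	const name = "default-k8s-diff-port-406000" // profile name from the log
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q does not exist\n", name) // kubectl's complaint above
		os.Exit(1)
	}
	fmt.Printf("context %q found\n", name)
}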

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-750000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (33.822541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-750000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-750000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-750000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.137125ms)

** stderr ** 
	error: context "embed-certs-750000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-750000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (33.473584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-750000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (31.245667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)
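
The `(-want +got)` block above is go-cmp diff notation: each line prefixed with `-` is an image the test expected for v1.31.0 but did not find, and the got side is empty because `image list` ran against a VM that never started. A small sketch of how such a diff is produced, assuming the comparison uses github.com/google/go-cmp (image names are copied from the log; the test's real code may differ):

// imagediff.go — illustrative sketch only; the real assertion lives in
// start_stop_delete_test.go. Shows how a "(-want +got)" diff like the one
// above comes out of go-cmp when the actual image list is empty.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.31.0",
		// ...remaining expected v1.31.0 images omitted for brevity
	}
	var got []string // empty: the VM never booted, so nothing was listed
	if diff := cmp.Diff(want, got); diff != "" {
		// "-" marks entries present in want but absent from got.
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}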

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-406000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-406000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-406000 describe deploy/metrics-server -n kube-system: exit status 1 (28.64825ms)

** stderr ** 
	error: context "default-k8s-diff-port-406000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-406000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (37.432209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-750000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-750000 --alsologtostderr -v=1: exit status 83 (48.884042ms)

-- stdout --
	* The control-plane node embed-certs-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-750000"

-- /stdout --
** stderr ** 
	I0819 11:30:28.188941   16759 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:28.189097   16759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:28.189101   16759 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:28.189103   16759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:28.189259   16759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:28.189494   16759 out.go:352] Setting JSON to false
	I0819 11:30:28.189505   16759 mustload.go:65] Loading cluster: embed-certs-750000
	I0819 11:30:28.189685   16759 config.go:182] Loaded profile config "embed-certs-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:28.196070   16759 out.go:177] * The control-plane node embed-certs-750000 host is not running: state=Stopped
	I0819 11:30:28.203034   16759 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-750000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-750000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (33.230583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-750000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (27.736417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-921000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-921000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.822027042s)

-- stdout --
	* [newest-cni-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-921000" primary control-plane node in "newest-cni-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:30:28.504850   16783 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:28.504978   16783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:28.504982   16783 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:28.504984   16783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:28.505134   16783 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:28.506222   16783 out.go:352] Setting JSON to false
	I0819 11:30:28.522519   16783 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7195,"bootTime":1724085033,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:30:28.522588   16783 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:30:28.527213   16783 out.go:177] * [newest-cni-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:30:28.534117   16783 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:30:28.534203   16783 notify.go:220] Checking for updates...
	I0819 11:30:28.541981   16783 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:30:28.545037   16783 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:30:28.548026   16783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:30:28.551039   16783 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:30:28.554024   16783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:30:28.555852   16783 config.go:182] Loaded profile config "default-k8s-diff-port-406000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:28.555912   16783 config.go:182] Loaded profile config "multinode-540000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:28.555972   16783 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:30:28.559981   16783 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:30:28.566803   16783 start.go:297] selected driver: qemu2
	I0819 11:30:28.566810   16783 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:30:28.566816   16783 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:30:28.569176   16783 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0819 11:30:28.569206   16783 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0819 11:30:28.577005   16783 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:30:28.578643   16783 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 11:30:28.578684   16783 cni.go:84] Creating CNI manager for ""
	I0819 11:30:28.578691   16783 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:30:28.578700   16783 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:30:28.578735   16783 start.go:340] cluster config:
	{Name:newest-cni-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:28.582423   16783 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:28.590090   16783 out.go:177] * Starting "newest-cni-921000" primary control-plane node in "newest-cni-921000" cluster
	I0819 11:30:28.593953   16783 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:30:28.593977   16783 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:30:28.593990   16783 cache.go:56] Caching tarball of preloaded images
	I0819 11:30:28.594057   16783 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:30:28.594064   16783 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:30:28.594149   16783 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/newest-cni-921000/config.json ...
	I0819 11:30:28.594161   16783 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/newest-cni-921000/config.json: {Name:mk5c104023a1e451d1f31489f00d50911c5141d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:30:28.594403   16783 start.go:360] acquireMachinesLock for newest-cni-921000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:28.594439   16783 start.go:364] duration metric: took 29.834µs to acquireMachinesLock for "newest-cni-921000"
	I0819 11:30:28.594453   16783 start.go:93] Provisioning new machine with config: &{Name:newest-cni-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:newest-cni-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:30:28.594495   16783 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:30:28.598074   16783 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:30:28.615635   16783 start.go:159] libmachine.API.Create for "newest-cni-921000" (driver="qemu2")
	I0819 11:30:28.615661   16783 client.go:168] LocalClient.Create starting
	I0819 11:30:28.615717   16783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:30:28.615747   16783 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:28.615756   16783 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:28.615794   16783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:30:28.615818   16783 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:28.615830   16783 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:28.616234   16783 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:30:28.765598   16783 main.go:141] libmachine: Creating SSH key...
	I0819 11:30:28.804014   16783 main.go:141] libmachine: Creating Disk image...
	I0819 11:30:28.804022   16783 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:30:28.804253   16783 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2
	I0819 11:30:28.813465   16783 main.go:141] libmachine: STDOUT: 
	I0819 11:30:28.813484   16783 main.go:141] libmachine: STDERR: 
	I0819 11:30:28.813540   16783 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2 +20000M
	I0819 11:30:28.821518   16783 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:30:28.821538   16783 main.go:141] libmachine: STDERR: 
	I0819 11:30:28.821548   16783 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2
	I0819 11:30:28.821553   16783 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:30:28.821561   16783 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:28.821588   16783 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:50:e4:ff:df:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2
	I0819 11:30:28.823166   16783 main.go:141] libmachine: STDOUT: 
	I0819 11:30:28.823188   16783 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:28.823205   16783 client.go:171] duration metric: took 207.541084ms to LocalClient.Create
	I0819 11:30:30.825406   16783 start.go:128] duration metric: took 2.230893125s to createHost
	I0819 11:30:30.825474   16783 start.go:83] releasing machines lock for "newest-cni-921000", held for 2.231036333s
	W0819 11:30:30.825533   16783 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:30.840004   16783 out.go:177] * Deleting "newest-cni-921000" in qemu2 ...
	W0819 11:30:30.870274   16783 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:30.870296   16783 start.go:729] Will try again in 5 seconds ...
	I0819 11:30:35.871797   16783 start.go:360] acquireMachinesLock for newest-cni-921000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:35.877765   16783 start.go:364] duration metric: took 5.844125ms to acquireMachinesLock for "newest-cni-921000"
	I0819 11:30:35.877835   16783 start.go:93] Provisioning new machine with config: &{Name:newest-cni-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:newest-cni-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:30:35.878029   16783 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:30:35.888045   16783 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:30:35.934500   16783 start.go:159] libmachine.API.Create for "newest-cni-921000" (driver="qemu2")
	I0819 11:30:35.934549   16783 client.go:168] LocalClient.Create starting
	I0819 11:30:35.934697   16783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/ca.pem
	I0819 11:30:35.934752   16783 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:35.934771   16783 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:35.934837   16783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19468-11838/.minikube/certs/cert.pem
	I0819 11:30:35.934893   16783 main.go:141] libmachine: Decoding PEM data...
	I0819 11:30:35.934904   16783 main.go:141] libmachine: Parsing certificate...
	I0819 11:30:35.935431   16783 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:30:36.095602   16783 main.go:141] libmachine: Creating SSH key...
	I0819 11:30:36.236419   16783 main.go:141] libmachine: Creating Disk image...
	I0819 11:30:36.236430   16783 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:30:36.236702   16783 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2
	I0819 11:30:36.247783   16783 main.go:141] libmachine: STDOUT: 
	I0819 11:30:36.247810   16783 main.go:141] libmachine: STDERR: 
	I0819 11:30:36.247874   16783 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2 +20000M
	I0819 11:30:36.257017   16783 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:30:36.257044   16783 main.go:141] libmachine: STDERR: 
	I0819 11:30:36.257054   16783 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2
	I0819 11:30:36.257060   16783 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:30:36.257070   16783 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:36.257109   16783 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:3f:2d:7c:cc:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2
	I0819 11:30:36.259440   16783 main.go:141] libmachine: STDOUT: 
	I0819 11:30:36.259475   16783 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:36.259490   16783 client.go:171] duration metric: took 324.905708ms to LocalClient.Create
	I0819 11:30:38.261835   16783 start.go:128] duration metric: took 2.383766167s to createHost
	I0819 11:30:38.261941   16783 start.go:83] releasing machines lock for "newest-cni-921000", held for 2.384146875s
	W0819 11:30:38.262268   16783 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:38.270777   16783 out.go:201] 
	W0819 11:30:38.275931   16783 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:38.275970   16783 out.go:270] * 
	* 
	W0819 11:30:38.278342   16783 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:30:38.288824   16783 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-921000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000: exit status 7 (67.620959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.89s)
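
As with the embed-certs group, the first start attempt fails, minikube deletes the half-created machine, waits five seconds ("Will try again in 5 seconds ..."), and the single retry fails identically before the run exits with GUEST_PROVISION. A minimal sketch of that one-retry shape; startHost here is a hypothetical stand-in, not minikube's actual start path, which is considerably more involved:

// retrysketch.go — hypothetical stand-in illustrating the single fixed-delay
// retry visible in the log above; not minikube's real implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that keeps failing on this runner.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		err = startHost()
	}
	if err != nil {
		// Both attempts failed: surface the GUEST_PROVISION error and stop.
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}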

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.516086166s)

-- stdout --
	* [default-k8s-diff-port-406000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-406000" primary control-plane node in "default-k8s-diff-port-406000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-406000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-406000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:30:30.425910   16804 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:30.426060   16804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:30.426063   16804 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:30.426065   16804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:30.426208   16804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:30.427195   16804 out.go:352] Setting JSON to false
	I0819 11:30:30.443017   16804 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7197,"bootTime":1724085033,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:30:30.443090   16804 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:30:30.448238   16804 out.go:177] * [default-k8s-diff-port-406000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:30:30.459217   16804 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:30:30.459274   16804 notify.go:220] Checking for updates...
	I0819 11:30:30.466272   16804 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:30:30.469245   16804 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:30:30.472270   16804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:30:30.475346   16804 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:30:30.478222   16804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:30:30.481567   16804 config.go:182] Loaded profile config "default-k8s-diff-port-406000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:30.481857   16804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:30:30.486241   16804 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:30:30.493250   16804 start.go:297] selected driver: qemu2
	I0819 11:30:30.493257   16804 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:30.493325   16804 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:30:30.495600   16804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:30:30.495629   16804 cni.go:84] Creating CNI manager for ""
	I0819 11:30:30.495637   16804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:30:30.495668   16804 start.go:340] cluster config:
	{Name:default-k8s-diff-port-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:30.499138   16804 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:30.505203   16804 out.go:177] * Starting "default-k8s-diff-port-406000" primary control-plane node in "default-k8s-diff-port-406000" cluster
	I0819 11:30:30.509262   16804 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:30:30.509277   16804 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:30:30.509285   16804 cache.go:56] Caching tarball of preloaded images
	I0819 11:30:30.509333   16804 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:30:30.509339   16804 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:30:30.509387   16804 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/default-k8s-diff-port-406000/config.json ...
	I0819 11:30:30.509886   16804 start.go:360] acquireMachinesLock for default-k8s-diff-port-406000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:30.825661   16804 start.go:364] duration metric: took 315.672458ms to acquireMachinesLock for "default-k8s-diff-port-406000"
	I0819 11:30:30.825769   16804 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:30:30.825797   16804 fix.go:54] fixHost starting: 
	I0819 11:30:30.826464   16804 fix.go:112] recreateIfNeeded on default-k8s-diff-port-406000: state=Stopped err=<nil>
	W0819 11:30:30.826511   16804 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:30:30.831144   16804 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-406000" ...
	I0819 11:30:30.844062   16804 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:30.844287   16804 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:8b:c9:88:6d:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0819 11:30:30.854580   16804 main.go:141] libmachine: STDOUT: 
	I0819 11:30:30.854654   16804 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:30.854769   16804 fix.go:56] duration metric: took 28.975458ms for fixHost
	I0819 11:30:30.854784   16804 start.go:83] releasing machines lock for "default-k8s-diff-port-406000", held for 29.073625ms
	W0819 11:30:30.854817   16804 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:30.854961   16804 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:30.854977   16804 start.go:729] Will try again in 5 seconds ...
	I0819 11:30:35.857160   16804 start.go:360] acquireMachinesLock for default-k8s-diff-port-406000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:35.857605   16804 start.go:364] duration metric: took 379.375µs to acquireMachinesLock for "default-k8s-diff-port-406000"
	I0819 11:30:35.857745   16804 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:30:35.857768   16804 fix.go:54] fixHost starting: 
	I0819 11:30:35.858535   16804 fix.go:112] recreateIfNeeded on default-k8s-diff-port-406000: state=Stopped err=<nil>
	W0819 11:30:35.858561   16804 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:30:35.864152   16804 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-406000" ...
	I0819 11:30:35.868015   16804 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:35.868206   16804 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:8b:c9:88:6d:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0819 11:30:35.877498   16804 main.go:141] libmachine: STDOUT: 
	I0819 11:30:35.877569   16804 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:35.877657   16804 fix.go:56] duration metric: took 19.892542ms for fixHost
	I0819 11:30:35.877671   16804 start.go:83] releasing machines lock for "default-k8s-diff-port-406000", held for 20.042917ms
	W0819 11:30:35.877840   16804 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-406000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-406000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:35.888041   16804 out.go:201] 
	W0819 11:30:35.892171   16804 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:35.892197   16804 out.go:270] * 
	* 
	W0819 11:30:35.894182   16804 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:30:35.903998   16804 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (50.857375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.57s)
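Note: both restart attempts in this test fail at the same call: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so qemu is never launched. A minimal way to confirm the daemon's state on the build host, using only the paths that appear in the log (the launchd label below is a guess, since it depends on how socket_vmnet was installed):

	# Does the unix socket exist at the path the driver dials?
	ls -l /var/run/socket_vmnet
	# Is anything accepting connections on it? (-U = unix-domain socket)
	nc -U /var/run/socket_vmnet < /dev/null && echo reachable
	# Hypothetical launchd label; adjust to the local install.
	sudo launchctl list | grep -i socket_vmnet

If the socket is missing or refusing connections, every qemu2 start in this report fails the same way regardless of profile.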

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-406000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (34.835584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-406000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-406000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-406000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.96775ms)

** stderr ** 
	error: context "default-k8s-diff-port-406000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-406000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (33.643875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
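Note: the "context does not exist" errors in this test are downstream of the failed SecondStart above: the profile's kubeconfig context is only (re)written on a successful start, so kubectl has nothing to target. A quick check, using plain kubectl (nothing minikube-specific is assumed):

	# List context names; absence of the profile confirms the cluster was
	# never brought back up, rather than a corrupted kubeconfig.
	kubectl config get-contexts -o name | grep default-k8s-diff-port-406000 \
	  || echo "context missing"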

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-406000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (30.456875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)
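Note: the +got side of the diff above is empty because "image list" against a stopped host returns no images, so all eight expected v1.31.0 images register as missing. To inspect what the assertion sees, one could pull the tags out of the JSON by hand; a sketch assuming jq is available and that the field is named repoTags (verify against the binary's actual JSON schema):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-406000 image list --format=json \
	  | jq -r '.[].repoTags[]'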

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-406000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-406000 --alsologtostderr -v=1: exit status 83 (44.634583ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-406000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-406000"

-- /stdout --
** stderr ** 
	I0819 11:30:36.295107   16827 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:36.295271   16827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:36.295275   16827 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:36.295277   16827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:36.295405   16827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:36.295629   16827 out.go:352] Setting JSON to false
	I0819 11:30:36.295637   16827 mustload.go:65] Loading cluster: default-k8s-diff-port-406000
	I0819 11:30:36.295841   16827 config.go:182] Loaded profile config "default-k8s-diff-port-406000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:36.301062   16827 out.go:177] * The control-plane node default-k8s-diff-port-406000 host is not running: state=Stopped
	I0819 11:30:36.305040   16827 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-406000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-406000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (28.948625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (29.580334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
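Note: three distinct exit codes appear in this group and are worth keeping apart: 80 (GUEST_PROVISION, from the failed start), 83 (pause refused because the host is not running), and 7 (status reporting a stopped host, which the harness treats as "may be ok"). A sketch of how a wrapper might branch on them; the code meanings are taken from this log, not from an exhaustive minikube exit-code table:

	out/minikube-darwin-arm64 status -p default-k8s-diff-port-406000 >/dev/null
	case $? in
	  0) echo "host running" ;;
	  7) echo "host stopped - the state this report shows" ;;
	  *) echo "unexpected status code" ;;
	esac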

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-921000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-921000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.188555208s)

-- stdout --
	* [newest-cni-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-921000" primary control-plane node in "newest-cni-921000" cluster
	* Restarting existing qemu2 VM for "newest-cni-921000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-921000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:30:41.467394   16889 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:41.467533   16889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:41.467536   16889 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:41.467538   16889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:41.467641   16889 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:41.468643   16889 out.go:352] Setting JSON to false
	I0819 11:30:41.484831   16889 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7208,"bootTime":1724085033,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:30:41.484890   16889 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:30:41.489712   16889 out.go:177] * [newest-cni-921000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:30:41.496681   16889 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:30:41.496734   16889 notify.go:220] Checking for updates...
	I0819 11:30:41.503770   16889 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:30:41.507691   16889 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:30:41.509077   16889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:30:41.513010   16889 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:30:41.516644   16889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:30:41.519961   16889 config.go:182] Loaded profile config "newest-cni-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:41.520221   16889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:30:41.524652   16889 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:30:41.531616   16889 start.go:297] selected driver: qemu2
	I0819 11:30:41.531623   16889 start.go:901] validating driver "qemu2" against &{Name:newest-cni-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:41.531669   16889 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:30:41.533908   16889 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 11:30:41.533936   16889 cni.go:84] Creating CNI manager for ""
	I0819 11:30:41.533944   16889 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:30:41.533963   16889 start.go:340] cluster config:
	{Name:newest-cni-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:30:41.537475   16889 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:30:41.545602   16889 out.go:177] * Starting "newest-cni-921000" primary control-plane node in "newest-cni-921000" cluster
	I0819 11:30:41.549613   16889 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:30:41.549628   16889 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:30:41.549640   16889 cache.go:56] Caching tarball of preloaded images
	I0819 11:30:41.549711   16889 preload.go:172] Found /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:30:41.549717   16889 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:30:41.549773   16889 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/newest-cni-921000/config.json ...
	I0819 11:30:41.550227   16889 start.go:360] acquireMachinesLock for newest-cni-921000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:41.550264   16889 start.go:364] duration metric: took 30.666µs to acquireMachinesLock for "newest-cni-921000"
	I0819 11:30:41.550274   16889 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:30:41.550280   16889 fix.go:54] fixHost starting: 
	I0819 11:30:41.550409   16889 fix.go:112] recreateIfNeeded on newest-cni-921000: state=Stopped err=<nil>
	W0819 11:30:41.550417   16889 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:30:41.554627   16889 out.go:177] * Restarting existing qemu2 VM for "newest-cni-921000" ...
	I0819 11:30:41.562907   16889 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:41.562940   16889 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:3f:2d:7c:cc:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2
	I0819 11:30:41.564947   16889 main.go:141] libmachine: STDOUT: 
	I0819 11:30:41.564968   16889 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:41.565000   16889 fix.go:56] duration metric: took 14.720416ms for fixHost
	I0819 11:30:41.565004   16889 start.go:83] releasing machines lock for "newest-cni-921000", held for 14.735417ms
	W0819 11:30:41.565011   16889 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:41.565047   16889 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:41.565052   16889 start.go:729] Will try again in 5 seconds ...
	I0819 11:30:46.567210   16889 start.go:360] acquireMachinesLock for newest-cni-921000: {Name:mkb7d95b6cb817ec0fc7f5acba3d0ea0d51c7584 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:30:46.567898   16889 start.go:364] duration metric: took 530.375µs to acquireMachinesLock for "newest-cni-921000"
	I0819 11:30:46.568078   16889 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:30:46.568098   16889 fix.go:54] fixHost starting: 
	I0819 11:30:46.568866   16889 fix.go:112] recreateIfNeeded on newest-cni-921000: state=Stopped err=<nil>
	W0819 11:30:46.568892   16889 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:30:46.578285   16889 out.go:177] * Restarting existing qemu2 VM for "newest-cni-921000" ...
	I0819 11:30:46.582244   16889 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:30:46.582559   16889 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:3f:2d:7c:cc:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19468-11838/.minikube/machines/newest-cni-921000/disk.qcow2
	I0819 11:30:46.591892   16889 main.go:141] libmachine: STDOUT: 
	I0819 11:30:46.591954   16889 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:30:46.592036   16889 fix.go:56] duration metric: took 23.940167ms for fixHost
	I0819 11:30:46.592051   16889 start.go:83] releasing machines lock for "newest-cni-921000", held for 24.107083ms
	W0819 11:30:46.592222   16889 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-921000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-921000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:30:46.600304   16889 out.go:201] 
	W0819 11:30:46.604245   16889 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:30:46.604262   16889 out.go:270] * 
	* 
	W0819 11:30:46.606691   16889 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:30:46.614304   16889 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-921000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000: exit status 7 (69.442625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
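Note: the failing invocation has the same shape in every profile: socket_vmnet_client connects to the daemon socket first and then execs qemu, passing the already-connected descriptor through as fd 3 (hence "-netdev socket,id=net0,fd=3" in the command line). Reduced to its shape, with the profile-specific arguments elided:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -accel hvf ... \
	  -device virtio-net-pci,netdev=net0,... -netdev socket,id=net0,fd=3 ...

The "Connection refused" is raised by the client before qemu ever runs, which is why libmachine's STDOUT is empty in both attempts.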

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-921000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000: exit status 7 (31.151041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-921000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-921000 --alsologtostderr -v=1: exit status 83 (43.269541ms)

-- stdout --
	* The control-plane node newest-cni-921000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-921000"

-- /stdout --
** stderr ** 
	I0819 11:30:46.800309   16903 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:30:46.800454   16903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:46.800457   16903 out.go:358] Setting ErrFile to fd 2...
	I0819 11:30:46.800459   16903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:30:46.800594   16903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:30:46.800811   16903 out.go:352] Setting JSON to false
	I0819 11:30:46.800818   16903 mustload.go:65] Loading cluster: newest-cni-921000
	I0819 11:30:46.801031   16903 config.go:182] Loaded profile config "newest-cni-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:30:46.805878   16903 out.go:177] * The control-plane node newest-cni-921000 host is not running: state=Stopped
	I0819 11:30:46.809899   16903 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-921000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-921000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000: exit status 7 (30.73775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-921000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000: exit status 7 (30.527417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 7.37
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.26
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 10
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.77
55 TestFunctional/serial/CacheCmd/cache/add_local 1.04
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.26
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 0.32
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
107 TestFunctional/parallel/ProfileCmd/profile_list 0.08
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
112 TestFunctional/parallel/Version/short 0.04
119 TestFunctional/parallel/ImageCommands/Setup 1.74
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.42
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.2
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.08
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.49
258 TestNoKubernetes/serial/Stop 3.34
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
275 TestStartStop/group/old-k8s-version/serial/Stop 2.03
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
286 TestStartStop/group/no-preload/serial/Stop 3.08
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
299 TestStartStop/group/embed-certs/serial/Stop 3.42
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.08
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 2.89
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-203000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-203000: exit status 85 (95.371541ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-203000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |          |
	|         | -p download-only-203000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:05:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:05:30.823880   12321 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:05:30.824014   12321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:05:30.824020   12321 out.go:358] Setting ErrFile to fd 2...
	I0819 11:05:30.824022   12321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:05:30.824144   12321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	W0819 11:05:30.824228   12321 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19468-11838/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19468-11838/.minikube/config/config.json: no such file or directory
	I0819 11:05:30.825644   12321 out.go:352] Setting JSON to true
	I0819 11:05:30.843572   12321 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5697,"bootTime":1724085033,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:05:30.843645   12321 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:05:30.849562   12321 out.go:97] [download-only-203000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:05:30.849709   12321 notify.go:220] Checking for updates...
	W0819 11:05:30.849732   12321 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 11:05:30.853620   12321 out.go:169] MINIKUBE_LOCATION=19468
	I0819 11:05:30.856591   12321 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:05:30.861545   12321 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:05:30.864583   12321 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:05:30.867587   12321 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	W0819 11:05:30.873613   12321 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:05:30.873848   12321 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:05:30.877477   12321 out.go:97] Using the qemu2 driver based on user configuration
	I0819 11:05:30.877495   12321 start.go:297] selected driver: qemu2
	I0819 11:05:30.877509   12321 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:05:30.877577   12321 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:05:30.880492   12321 out.go:169] Automatically selected the socket_vmnet network
	I0819 11:05:30.885906   12321 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 11:05:30.885996   12321 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:05:30.886057   12321 cni.go:84] Creating CNI manager for ""
	I0819 11:05:30.886077   12321 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 11:05:30.886126   12321 start.go:340] cluster config:
	{Name:download-only-203000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-203000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:05:30.890106   12321 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:05:30.894544   12321 out.go:97] Downloading VM boot image ...
	I0819 11:05:30.894570   12321 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0819 11:05:36.109649   12321 out.go:97] Starting "download-only-203000" primary control-plane node in "download-only-203000" cluster
	I0819 11:05:36.109676   12321 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:05:36.174326   12321 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:05:36.174349   12321 cache.go:56] Caching tarball of preloaded images
	I0819 11:05:36.175203   12321 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:05:36.179590   12321 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 11:05:36.179597   12321 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:05:36.278487   12321 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:05:41.823491   12321 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:05:41.823656   12321 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:05:42.518684   12321 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 11:05:42.518880   12321 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/download-only-203000/config.json ...
	I0819 11:05:42.518897   12321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19468-11838/.minikube/profiles/download-only-203000/config.json: {Name:mk1a60e012ab2e3f16a9ea9e6707987cce6ee765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:05:42.519134   12321 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:05:42.519310   12321 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0819 11:05:42.980121   12321 out.go:193] 
	W0819 11:05:42.986200   12321 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19468-11838/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106b83940 0x106b83940 0x106b83940 0x106b83940 0x106b83940 0x106b83940 0x106b83940] Decompressors:map[bz2:0x1400000e9c8 gz:0x1400000ea50 tar:0x1400000ea00 tar.bz2:0x1400000ea10 tar.gz:0x1400000ea20 tar.xz:0x1400000ea30 tar.zst:0x1400000ea40 tbz2:0x1400000ea10 tgz:0x1400000ea20 txz:0x1400000ea30 tzst:0x1400000ea40 xz:0x1400000ea58 zip:0x1400000ea60 zst:0x1400000ea70] Getters:map[file:0x140009fe550 http:0x14000756190 https:0x140007561e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0819 11:05:42.986228   12321 out_reason.go:110] 
	W0819 11:05:42.994085   12321 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:05:42.998062   12321 out.go:193] 
	
	
	* The control-plane node download-only-203000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-203000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
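
Note that this subtest passes even though `minikube logs` exits 85: on a download-only profile there is no host to collect logs from, and the only real failure in the captured log is the kubectl download, which dies on a missing upstream checksum file (HTTP 404), most likely because no darwin/arm64 kubectl was ever published for v1.20.0. A minimal sketch for checking that hypothesis outside the test suite (the curl probes are illustrative, not part of the harness):

    # Probe the checksum URLs the getter fetches; a 404 on the v1.20.0 file
    # suggests the binary simply does not exist upstream for darwin/arm64,
    # while the v1.31.0 file used later in this run should resolve.
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
    curl -sI https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256 | head -n 1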

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-203000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (7.37s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-843000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-843000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (7.372437708s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.37s)
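
With -o=json, minikube emits line-delimited CloudEvents-style JSON instead of styled text, and that stream is what the json-events subtest consumes. A hedged sketch of inspecting it by hand (the event-type filter and the .data.name field are assumptions from memory, not taken from this run):

    # Stream the download-only start as JSON and print each step name.
    out/minikube-darwin-arm64 start -o=json --download-only -p download-only-843000 \
      --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 \
      | jq -r 'select(.type | endswith(".step")) | .data.name'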

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-843000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-843000: exit status 85 (84.175041ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-203000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
	|         | -p download-only-203000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
	| delete  | -p download-only-203000        | download-only-203000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT | 19 Aug 24 11:05 PDT |
	| start   | -o=json --download-only        | download-only-843000 | jenkins | v1.33.1 | 19 Aug 24 11:05 PDT |                     |
	|         | -p download-only-843000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:05:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:05:43.420891   12351 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:05:43.421079   12351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:05:43.421082   12351 out.go:358] Setting ErrFile to fd 2...
	I0819 11:05:43.421085   12351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:05:43.421217   12351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:05:43.422231   12351 out.go:352] Setting JSON to true
	I0819 11:05:43.438303   12351 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5710,"bootTime":1724085033,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:05:43.438373   12351 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:05:43.443119   12351 out.go:97] [download-only-843000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:05:43.443226   12351 notify.go:220] Checking for updates...
	I0819 11:05:43.446911   12351 out.go:169] MINIKUBE_LOCATION=19468
	I0819 11:05:43.450103   12351 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:05:43.454090   12351 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:05:43.455666   12351 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:05:43.459117   12351 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	W0819 11:05:43.465097   12351 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:05:43.465313   12351 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:05:43.468015   12351 out.go:97] Using the qemu2 driver based on user configuration
	I0819 11:05:43.468024   12351 start.go:297] selected driver: qemu2
	I0819 11:05:43.468028   12351 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:05:43.468076   12351 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:05:43.471014   12351 out.go:169] Automatically selected the socket_vmnet network
	I0819 11:05:43.476206   12351 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 11:05:43.476304   12351 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:05:43.476325   12351 cni.go:84] Creating CNI manager for ""
	I0819 11:05:43.476334   12351 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:05:43.476344   12351 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:05:43.476384   12351 start.go:340] cluster config:
	{Name:download-only-843000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:05:43.479708   12351 iso.go:125] acquiring lock: {Name:mk1182fa87ba49f1e009b3ded77c456c9e9e8e4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:05:43.483059   12351 out.go:97] Starting "download-only-843000" primary control-plane node in "download-only-843000" cluster
	I0819 11:05:43.483067   12351 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:05:43.548261   12351 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:05:43.548291   12351 cache.go:56] Caching tarball of preloaded images
	I0819 11:05:43.548502   12351 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:05:43.553571   12351 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 11:05:43.553578   12351 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:05:43.652025   12351 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:05:48.239686   12351 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:05:48.240000   12351 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19468-11838/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-843000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-843000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-843000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.3s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-041000 --alsologtostderr --binary-mirror http://127.0.0.1:51949 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-041000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-041000
--- PASS: TestBinaryMirror (0.30s)
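
TestBinaryMirror passes --binary-mirror http://127.0.0.1:51949, pointing the kubectl/kubelet/kubeadm downloads at a local HTTP server that the test itself stands up. A rough manual equivalent, assuming the served directory mimics the dl.k8s.io release layout (the port and directory here are illustrative only):

    # Serve a local mirror and ask minikube to pull binaries from it
    # instead of dl.k8s.io.
    python3 -m http.server 51949 --directory ./mirror &
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-041000 \
      --binary-mirror http://127.0.0.1:51949 --driver=qemu2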

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-110000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-110000: exit status 85 (60.015708ms)

-- stdout --
	* Profile "addons-110000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-110000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-110000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-110000: exit status 85 (63.650209ms)

-- stdout --
	* Profile "addons-110000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-110000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.26s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.26s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 status: exit status 7 (31.840042ms)

-- stdout --
	nospam-240000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 status: exit status 7 (30.111125ms)

-- stdout --
	nospam-240000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 status: exit status 7 (29.7715ms)

-- stdout --
	nospam-240000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
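
The repeated exit status 7 is consistent with how minikube's status command composes its exit code as a bitmask; as I read the status implementation, host-not-running contributes 1, cluster-not-running 2, and kubernetes-not-running 4, so a fully stopped profile reports 1+2+4 = 7. A quick check of that reading (the bit values are my interpretation, not stated in this log):

    # On a stopped profile every component bit is set, giving exit code 7.
    out/minikube-darwin-arm64 -p nospam-240000 status
    echo "exit=$?"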

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 pause: exit status 83 (42.358083ms)

-- stdout --
	* The control-plane node nospam-240000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-240000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 pause: exit status 83 (39.79925ms)

-- stdout --
	* The control-plane node nospam-240000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-240000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 pause: exit status 83 (38.865ms)

-- stdout --
	* The control-plane node nospam-240000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-240000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 unpause: exit status 83 (39.685583ms)

-- stdout --
	* The control-plane node nospam-240000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-240000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 unpause: exit status 83 (40.959666ms)

-- stdout --
	* The control-plane node nospam-240000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-240000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 unpause: exit status 83 (39.797167ms)

-- stdout --
	* The control-plane node nospam-240000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-240000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (10s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 stop: (3.55801625s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 stop: (3.418618791s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-240000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-240000 stop: (3.016249208s)
--- PASS: TestErrorSpam/stop (10.00s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19468-11838/.minikube/files/etc/test/nested/copy/12317/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.77s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.77s)

TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2723833940/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 cache add minikube-local-cache-test:functional-924000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 cache delete minikube-local-cache-test:functional-924000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-924000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 config get cpus: exit status 14 (30.989666ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 config get cpus: exit status 14 (36.12025ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
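
The two exit-14 results bracket a successful set/get/unset cycle, which is exactly what the test asserts: config get on a key that is not set fails with "specified key could not be found in config", while get after set succeeds. A condensed sketch of the same cycle (profile name reused from the run above):

    # get on an unset key fails (exit 14 in this run); get after set succeeds.
    out/minikube-darwin-arm64 -p functional-924000 config get cpus || echo "unset: exit=$?"
    out/minikube-darwin-arm64 -p functional-924000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-924000 config get cpus
    out/minikube-darwin-arm64 -p functional-924000 config unset cpus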

TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-924000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-924000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (122.774542ms)

-- stdout --
	* [functional-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0819 11:07:25.974761   12893 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:07:25.974915   12893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:07:25.974918   12893 out.go:358] Setting ErrFile to fd 2...
	I0819 11:07:25.974921   12893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:07:25.975054   12893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:07:25.976106   12893 out.go:352] Setting JSON to false
	I0819 11:07:25.992258   12893 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5812,"bootTime":1724085033,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:07:25.992332   12893 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:07:25.996105   12893 out.go:177] * [functional-924000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:07:26.003958   12893 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:07:26.004030   12893 notify.go:220] Checking for updates...
	I0819 11:07:26.012961   12893 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:07:26.015976   12893 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:07:26.018942   12893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:07:26.025919   12893 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:07:26.029886   12893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:07:26.033191   12893 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:07:26.033476   12893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:07:26.037946   12893 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:07:26.044905   12893 start.go:297] selected driver: qemu2
	I0819 11:07:26.044910   12893 start.go:901] validating driver "qemu2" against &{Name:functional-924000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:07:26.044963   12893 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:07:26.050785   12893 out.go:201] 
	W0819 11:07:26.054902   12893 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 11:07:26.058939   12893 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-924000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)
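
The dry run exercises only client-side validation: the 250MB request trips the RSRC_INSUFFICIENT_REQ_MEMORY check against the stated 1800MB floor, while the second invocation (no --memory override) validates cleanly. For contrast, a hedged sketch of a dry run that should pass the same check (the 2048mb value is arbitrary; anything at or above the floor should do):

    # Same dry run, but with a memory request above the usable minimum.
    out/minikube-darwin-arm64 start -p functional-924000 --dry-run --memory 2048mb \
      --alsologtostderr --driver=qemu2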

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-924000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-924000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.654375ms)

-- stdout --
	* [functional-924000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0819 11:07:25.857582   12889 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:07:25.857702   12889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:07:25.857706   12889 out.go:358] Setting ErrFile to fd 2...
	I0819 11:07:25.857708   12889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:07:25.857836   12889 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19468-11838/.minikube/bin
	I0819 11:07:25.859282   12889 out.go:352] Setting JSON to false
	I0819 11:07:25.876017   12889 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5812,"bootTime":1724085033,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:07:25.876115   12889 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:07:25.880362   12889 out.go:177] * [functional-924000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0819 11:07:25.888984   12889 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 11:07:25.889032   12889 notify.go:220] Checking for updates...
	I0819 11:07:25.894936   12889 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	I0819 11:07:25.897939   12889 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:07:25.900970   12889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:07:25.903914   12889 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	I0819 11:07:25.906911   12889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:07:25.910275   12889 config.go:182] Loaded profile config "functional-924000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:07:25.910519   12889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:07:25.914835   12889 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0819 11:07:25.922006   12889 start.go:297] selected driver: qemu2
	I0819 11:07:25.922015   12889 start.go:901] validating driver "qemu2" against &{Name:functional-924000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-924000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:07:25.922089   12889 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:07:25.927959   12889 out.go:201] 
	W0819 11:07:25.931955   12889 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 11:07:25.935777   12889 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
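Note: the failure captured in this block is the expected outcome. The InternationalLanguage test deliberately requests too little memory so it can verify that the error text is localized; the French message says the requested allocation of 250 MiB is below the usable minimum of 1800 MB. As an aside on the mixed units in that message, here is a minimal Go sketch of the MiB-to-MB comparison behind RSRC_INSUFFICIENT_REQ_MEMORY (illustration only; requestedMiB and minimumMB are hypothetical names, not minikube's own):

```go
package main

import "fmt"

func main() {
	const requestedMiB = 250 // memory requested by the test, per the log
	const minimumMB = 1800   // minikube's usable minimum, per the log

	// 1 MiB = 1024*1024 bytes, while 1 MB = 1000*1000 bytes.
	requestedMB := requestedMiB * 1024 * 1024 / (1000 * 1000)

	fmt.Printf("requested %d MiB = %d MB; minimum %d MB; sufficient: %v\n",
		requestedMiB, requestedMB, minimumMB, requestedMB >= minimumMB)
}
```

Running it prints "requested 250 MiB = 262 MB; minimum 1800 MB; sufficient: false", which is exactly the comparison the log reports.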

TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/License (0.32s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-924000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "45.710208ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.925917ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "45.971833ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "32.792667ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.74s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.710039583s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-924000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image rm kicbase/echo-server:functional-924000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-924000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 image save --daemon kicbase/echo-server:functional-924000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-924000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013193542s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
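The check above resolves the tunnel's service name through macOS's dscacheutil, which consults the system resolver the same way ordinary applications do (unlike a plain Go net.LookupHost, which may bypass it). A minimal sketch of the same probe, assuming only that dscacheutil is on PATH (this is not the test's actual helper):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The trailing dot makes the name fully qualified, matching the log above.
	name := "nginx-svc.default.svc.cluster.local."
	out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name", name).CombinedOutput()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Printf("%s", out) // ip_address lines appear here when the tunnel works
}
```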

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-924000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-924000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-924000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-924000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.42s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-680000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-680000 --output=json --user=testUser: (3.423398208s)
--- PASS: TestJSONOutput/stop/Command (3.42s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-791000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-791000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.963792ms)

-- stdout --
	{"specversion":"1.0","id":"aecc9e60-3ebc-4c73-813c-f3077ddc3e98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-791000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"33a05e44-9c2b-4ed9-ba5b-ae1d7cf760ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19468"}}
	{"specversion":"1.0","id":"2d1dd720-c2b4-4182-b78c-20ed08e7856d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig"}}
	{"specversion":"1.0","id":"c2dfd783-b689-4a01-9860-4f86c62c26b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4c3791fe-0faf-40f6-87b7-c8830184d559","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dc648ed2-f064-4ad4-9b6a-3cf5d3aae23f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube"}}
	{"specversion":"1.0","id":"bd7056dd-4d61-4b65-968c-e19b779cb4cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f9fb5f8b-133b-4f3d-b14c-961df1f6cf88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-791000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-791000
--- PASS: TestErrorJSONOutput (0.20s)
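Each line that minikube emits with --output=json is a CloudEvents-style JSON object, as the stdout above shows, so a consumer only needs a small struct to pick out the step, info, and error events. A minimal decoding sketch (the minikubeEvent type below is illustrative, not minikube's own):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the fields visible in the JSON lines above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event copied verbatim from the stdout above.
	line := `{"specversion":"1.0","id":"f9fb5f8b-133b-4f3d-b14c-961df1f6cf88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, "->", ev.Data["name"], "exit", ev.Data["exitcode"])
}
```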

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.08s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.08s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-837000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-837000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.796416ms)

-- stdout --
	* [NoKubernetes-837000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19468
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19468-11838/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19468-11838/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
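The MK_USAGE failure above is the intended result: --no-kubernetes and --kubernetes-version contradict each other, so minikube rejects the combination before doing any work. A generic sketch of that style of mutual-exclusion check (illustration only, not minikube's implementation; the flag names are borrowed from the log):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// The two flags contradict each other, so refuse the combination up front.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the same exit status the log records for MK_USAGE
	}
	fmt.Println("flags accepted")
}
```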

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-837000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-837000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.516917ms)

-- stdout --
	* The control-plane node NoKubernetes-837000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-837000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.49s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.725280833s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.761395208s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.49s)

TestNoKubernetes/serial/Stop (3.34s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-837000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-837000: (3.339656458s)
--- PASS: TestNoKubernetes/serial/Stop (3.34s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-837000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-837000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.652208ms)

-- stdout --
	* The control-plane node NoKubernetes-837000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-837000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-163000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

TestStartStop/group/old-k8s-version/serial/Stop (2.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-545000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-545000 --alsologtostderr -v=3: (2.032149833s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-545000 -n old-k8s-version-545000: exit status 7 (53.72675ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-545000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.08s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-732000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-732000 --alsologtostderr -v=3: (3.075144791s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (55.076292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-732000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.42s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-750000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-750000 --alsologtostderr -v=3: (3.417902875s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.42s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-750000 -n embed-certs-750000: exit status 7 (56.004541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-750000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-406000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-406000 --alsologtostderr -v=3: (2.079961292s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (59.574709ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-406000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-921000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.89s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-921000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-921000 --alsologtostderr -v=3: (2.887523583s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.89s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-921000 -n newest-cni-921000: exit status 7 (58.46825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-921000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10.25s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3557510356/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724090809610841000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3557510356/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724090809610841000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3557510356/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724090809610841000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3557510356/001/test-1724090809610841000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.198125ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.67875ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.3665ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.733541ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.906083ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.957833ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.762416ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo umount -f /mount-9p": exit status 83 (48.558333ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-924000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3557510356/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.25s)

TestFunctional/parallel/MountCmd/specific-port (11.53s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port32338699/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.395292ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.672333ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.660083ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.354041ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.482125ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (81.907833ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.538041ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "sudo umount -f /mount-9p": exit status 83 (52.511334ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-924000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port32338699/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.53s)

TestFunctional/parallel/MountCmd/VerifyCleanup (14.27s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup610762509/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup610762509/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup610762509/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1: exit status 83 (85.840084ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1: exit status 83 (83.986041ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1: exit status 83 (85.701958ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1: exit status 83 (85.303584ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1: exit status 83 (85.720584ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1: exit status 83 (86.206208ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1: exit status 83 (85.919375ms)

-- stdout --
	* The control-plane node functional-924000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-924000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup610762509/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup610762509/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-924000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup610762509/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (14.27s)
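VerifyCleanup differs from the other mount tests in that it starts three mount daemons at once and then checks that stopping them removes all three mounts. A rough shell equivalent of that sequence, assuming a running cluster and a placeholder source directory $SRC:

    # three concurrent daemons, matching the /mount1../mount3 layout above
    out/minikube-darwin-arm64 mount -p functional-924000 "$SRC:/mount1" &
    out/minikube-darwin-arm64 mount -p functional-924000 "$SRC:/mount2" &
    out/minikube-darwin-arm64 mount -p functional-924000 "$SRC:/mount3" &

    # each mount should be visible from inside the guest
    out/minikube-darwin-arm64 -p functional-924000 ssh "findmnt -T" /mount1

    # stopping the daemons is the cleanup being verified; afterwards the
    # findmnt probe should fail again for all three paths
    kill %1 %2 %3    # job-control specs; use the daemons' PIDs in a script

Here the test never got that far: the findmnt probe failed seven times in a row because the host was stopped.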

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
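This skip is a deliberate gate, not a failure: the suite only runs TestGvisorAddon when its --gvisor flag is set, and this job leaves it at the default of false. If the integration tests are invoked directly, the gate can presumably be flipped on the go test command line (the package path below is an assumption about the repo layout):

    # enable the gvisor test; it is skipped whenever -gvisor is false
    go test ./test/integration -run TestGvisorAddon -gvisor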

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-150000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-150000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-150000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-150000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-150000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-150000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-150000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-150000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-150000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-150000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-150000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: /etc/hosts:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: /etc/resolv.conf:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-150000

>>> host: crictl pods:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: crictl containers:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> k8s: describe netcat deployment:
error: context "cilium-150000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-150000" does not exist

>>> k8s: netcat logs:
error: context "cilium-150000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-150000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-150000" does not exist

>>> k8s: coredns logs:
error: context "cilium-150000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-150000" does not exist

>>> k8s: api server logs:
error: context "cilium-150000" does not exist

>>> host: /etc/cni:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: ip a s:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: ip r s:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: iptables-save:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: iptables table nat:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-150000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-150000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-150000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-150000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-150000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-150000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-150000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-150000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-150000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-150000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-150000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: kubelet daemon config:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> k8s: kubelet logs:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-150000

>>> host: docker daemon status:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: docker daemon config:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: docker system info:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: cri-docker daemon status:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: cri-docker daemon config:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: cri-dockerd version:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: containerd daemon status:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: containerd daemon config:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: containerd config dump:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: crio daemon status:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: crio daemon config:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: /etc/crio:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

>>> host: crio config:
* Profile "cilium-150000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150000"

----------------------- debugLogs end: cilium-150000 [took: 2.209700708s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-150000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-150000
--- SKIP: TestNetworkPlugins/group/cilium (2.31s)
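Every probe in the debugLogs dump above failed in one of the same three ways (no kubectl context, no minikube profile, empty kubeconfig), and all three symptoms trace back to a single cause: the cilium-150000 cluster was never created before the test was skipped. Two commands are enough to confirm that reading; both tools are the ones quoted in the dump itself:

    # neither listing should contain a cilium-150000 entry
    out/minikube-darwin-arm64 profile list
    kubectl config get-contexts

The harness then deletes the profile anyway (helpers_test.go:178), which is harmless when the profile does not exist.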

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-266000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-266000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
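The disable-driver-mounts group exercises minikube's --disable-driver-mounts flag, which only changes behavior on the virtualbox driver (it suppresses the hypervisor's built-in shared folders), so it is skipped unconditionally on this QEMU job. A sketch of what the test would exercise on a machine with VirtualBox available; treat the exact flag combination as an assumption rather than the test's literal invocation:

    # start without VirtualBox's default shared-folder mounts
    minikube start -p disable-driver-mounts-266000 --driver=virtualbox --disable-driver-mounts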

                                                
                                    